00:00:00.001 Started by upstream project "autotest-per-patch" build number 127191
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "jbp-per-patch" build number 24328
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:07.068 The recommended git tool is: git
00:00:07.068 using credential 00000000-0000-0000-0000-000000000002
00:00:07.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:07.080 Fetching changes from the remote Git repository
00:00:07.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:07.093 Using shallow fetch with depth 1
00:00:07.093 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:07.093 > git --version # timeout=10
00:00:07.104 > git --version # 'git version 2.39.2'
00:00:07.104 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.114 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.114 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/26 # timeout=5
00:00:11.621 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.634 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.646 Checking out Revision 124d5bb683991a063807d96399433650600a89c8 (FETCH_HEAD)
00:00:11.646 > git config core.sparsecheckout # timeout=10
00:00:11.704 > git read-tree -mu HEAD # timeout=10
00:00:11.725 > git checkout -f 124d5bb683991a063807d96399433650600a89c8 # timeout=5
00:00:11.751 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch and nightly"
00:00:11.751 > git rev-list --no-walk bb4bbb76f2437bc8cff7e7e4a466bce7165cd7f0 # timeout=10
00:00:11.866 [Pipeline] Start of Pipeline
00:00:11.879 [Pipeline] library
00:00:11.880 Loading library shm_lib@master
00:00:11.880 Library shm_lib@master is cached. Copying from home.
00:00:11.893 [Pipeline] node
00:00:11.903 Running on WFP9 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:11.904 [Pipeline] {
00:00:11.912 [Pipeline] catchError
00:00:11.913 [Pipeline] {
00:00:11.923 [Pipeline] wrap
00:00:11.930 [Pipeline] {
00:00:11.936 [Pipeline] stage
00:00:11.938 [Pipeline] { (Prologue)
00:00:12.135 [Pipeline] sh
00:00:12.423 + logger -p user.info -t JENKINS-CI
00:00:12.441 [Pipeline] echo
00:00:12.442 Node: WFP9
00:00:12.451 [Pipeline] sh
00:00:12.757 [Pipeline] setCustomBuildProperty
00:00:12.768 [Pipeline] echo
00:00:12.770 Cleanup processes
00:00:12.776 [Pipeline] sh
00:00:13.058 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:13.058 495393 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:13.071 [Pipeline] sh
00:00:13.351 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:13.351 ++ grep -v 'sudo pgrep'
00:00:13.351 ++ awk '{print $1}'
00:00:13.351 + sudo kill -9
00:00:13.351 + true
00:00:13.368 [Pipeline] cleanWs
00:00:13.377 [WS-CLEANUP] Deleting project workspace...
00:00:13.377 [WS-CLEANUP] Deferred wipeout is used...
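[Editor's note] The fetch above pulls a single Gerrit patchset by its change ref instead of cloning a branch. Below is a minimal bash sketch of the same pattern, assuming anonymous HTTPS access; the job's GIT_ASKPASS credential, proxy, and timeout settings are omitted. The URL and ref are copied from the log, and the jbp-checkout directory name is illustrative.

  #!/usr/bin/env bash
  set -euo pipefail
  # Gerrit publishes every patchset under
  # refs/changes/<last two digits of change>/<change number>/<patchset number>,
  # so refs/changes/41/22241/26 is patchset 26 of change 22241.
  repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  ref=refs/changes/41/22241/26
  git init jbp-checkout && cd jbp-checkout
  # --depth=1 keeps the fetch shallow, matching "Using shallow fetch with depth 1" above.
  git fetch --tags --force --progress --depth=1 -- "$repo" "$ref"
  # FETCH_HEAD now names the fetched patchset commit; detach onto it,
  # as the job does with the explicit SHA.
  git checkout -f FETCH_HEAD

Note that the /a/ path on a Gerrit host normally requires HTTP credentials; the job supplies them through GIT_ASKPASS, as the log shows.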
00:00:13.382 [WS-CLEANUP] done
00:00:13.387 [Pipeline] setCustomBuildProperty
00:00:13.402 [Pipeline] sh
00:00:13.682 + sudo git config --global --replace-all safe.directory '*'
00:00:13.767 [Pipeline] httpRequest
00:00:13.808 [Pipeline] echo
00:00:13.810 Sorcerer 10.211.164.101 is alive
00:00:13.819 [Pipeline] httpRequest
00:00:13.824 HttpMethod: GET
00:00:13.824 URL: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:13.825 Sending request to url: http://10.211.164.101/packages/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:13.834 Response Code: HTTP/1.1 200 OK
00:00:13.834 Success: Status code 200 is in the accepted range: 200,404
00:00:13.835 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:19.187 [Pipeline] sh
00:00:19.474 + tar --no-same-owner -xf jbp_124d5bb683991a063807d96399433650600a89c8.tar.gz
00:00:19.553 [Pipeline] httpRequest
00:00:19.575 [Pipeline] echo
00:00:19.577 Sorcerer 10.211.164.101 is alive
00:00:19.585 [Pipeline] httpRequest
00:00:19.590 HttpMethod: GET
00:00:19.590 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:19.591 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:19.606 Response Code: HTTP/1.1 200 OK
00:00:19.606 Success: Status code 200 is in the accepted range: 200,404
00:00:19.606 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:56.798 [Pipeline] sh
00:02:57.085 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:02:59.685 [Pipeline] sh
00:03:00.015 + git -C spdk log --oneline -n5
00:03:00.015 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:03:00.015 fc2398dfa raid: clear base bdev configure_cb after executing
00:03:00.015 5558f3f50 raid: complete bdev_raid_create after sb is written
00:03:00.015 d005e023b raid: fix empty slot not updated in sb after resize
00:03:00.015 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:03:00.099 [Pipeline] sh
00:03:00.485 + ip --json address
00:03:00.522 [Pipeline] readJSON
00:03:00.544 [Pipeline] echo
00:03:00.546 NIC with Beetle address is already setup (192.168.10.10)
00:03:00.552 [Pipeline] withCredentials
00:03:00.587 Masking supported pattern matches of $beetle_key
00:03:00.588 [Pipeline] {
00:03:00.608 [Pipeline] sh
00:03:00.892 + ssh -i **** -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=5 root@192.168.10.11 'for gpio in {0..10}; do Beetle --SetGpio "$gpio" HIGH; done'
00:03:01.460 Warning: Permanently added '192.168.10.11' (ED25519) to the list of known hosts.
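[Editor's note] Both tarballs above come from the pool's package cache ("Sorcerer" in the log), keyed by commit SHA, which avoids a fresh clone on every run; the accepted-range of 200,404 suggests a cache miss is tolerated and handled by a fallback elsewhere. A rough equivalent of the fetch-and-unpack step is sketched below, assuming curl is available; the host, package path, and file name are copied from the log, and curl stands in for the Jenkins httpRequest step.

  #!/usr/bin/env bash
  set -euo pipefail
  cache=http://10.211.164.101/packages
  sha=124d5bb683991a063807d96399433650600a89c8   # jbp commit checked out earlier in the log
  # -f makes curl fail on HTTP errors instead of saving an error page;
  # the real job also accepts 404 so it can fall back on a cache miss.
  curl -fsS -o "jbp_${sha}.tar.gz" "${cache}/jbp_${sha}.tar.gz"
  # --no-same-owner extracts files owned by the invoking user rather than
  # the UIDs stored in the archive, matching the flag used by the job.
  tar --no-same-owner -xf "jbp_${sha}.tar.gz"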
00:03:05.670 [Pipeline] } 00:03:05.700 [Pipeline] // withCredentials 00:03:05.705 [Pipeline] } 00:03:05.726 [Pipeline] // stage 00:03:05.734 [Pipeline] stage 00:03:05.737 [Pipeline] { (Prepare) 00:03:05.755 [Pipeline] writeFile 00:03:05.773 [Pipeline] sh 00:03:06.059 + logger -p user.info -t JENKINS-CI 00:03:06.071 [Pipeline] sh 00:03:06.355 + logger -p user.info -t JENKINS-CI 00:03:06.365 [Pipeline] sh 00:03:06.645 + cat autorun-spdk.conf 00:03:06.645 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:06.645 SPDK_TEST_NVMF=1 00:03:06.645 SPDK_TEST_NVME_CLI=1 00:03:06.645 SPDK_TEST_NVMF_NICS=mlx5 00:03:06.645 SPDK_RUN_UBSAN=1 00:03:06.645 NET_TYPE=phy 00:03:06.652 RUN_NIGHTLY=0 00:03:06.656 [Pipeline] readFile 00:03:06.680 [Pipeline] withEnv 00:03:06.682 [Pipeline] { 00:03:06.696 [Pipeline] sh 00:03:07.038 + set -ex 00:03:07.038 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:03:07.038 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:07.038 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.038 ++ SPDK_TEST_NVMF=1 00:03:07.038 ++ SPDK_TEST_NVME_CLI=1 00:03:07.038 ++ SPDK_TEST_NVMF_NICS=mlx5 00:03:07.038 ++ SPDK_RUN_UBSAN=1 00:03:07.038 ++ NET_TYPE=phy 00:03:07.038 ++ RUN_NIGHTLY=0 00:03:07.038 + case $SPDK_TEST_NVMF_NICS in 00:03:07.038 + DRIVERS=mlx5_ib 00:03:07.038 + [[ -n mlx5_ib ]] 00:03:07.038 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:07.038 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:10.332 rmmod: ERROR: Module irdma is not currently loaded 00:03:10.332 rmmod: ERROR: Module i40iw is not currently loaded 00:03:10.332 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:10.332 + true 00:03:10.332 + for D in $DRIVERS 00:03:10.332 + sudo modprobe mlx5_ib 00:03:10.332 + exit 0 00:03:10.341 [Pipeline] } 00:03:10.356 [Pipeline] // withEnv 00:03:10.361 [Pipeline] } 00:03:10.378 [Pipeline] // stage 00:03:10.386 [Pipeline] catchError 00:03:10.388 [Pipeline] { 00:03:10.403 [Pipeline] timeout 00:03:10.403 Timeout set to expire in 1 hr 0 min 00:03:10.404 [Pipeline] { 00:03:10.415 [Pipeline] stage 00:03:10.418 [Pipeline] { (Tests) 00:03:10.432 [Pipeline] sh 00:03:10.714 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:03:10.714 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:03:10.714 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:03:10.714 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:03:10.714 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:10.714 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:03:10.714 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:03:10.714 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:03:10.714 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:03:10.714 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:03:10.714 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:03:10.714 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:03:10.714 + source /etc/os-release 00:03:10.714 ++ NAME='Fedora Linux' 00:03:10.714 ++ VERSION='39 (Cloud Edition)' 00:03:10.714 ++ ID=fedora 00:03:10.714 ++ VERSION_ID=39 00:03:10.714 ++ VERSION_CODENAME= 00:03:10.714 ++ PLATFORM_ID=platform:f39 00:03:10.714 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:10.714 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:10.714 ++ LOGO=fedora-logo-icon 00:03:10.714 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:10.714 ++ HOME_URL=https://fedoraproject.org/ 00:03:10.714 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:10.714 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:10.714 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:10.714 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:10.714 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:10.714 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:10.714 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:10.714 ++ SUPPORT_END=2024-11-12 00:03:10.714 ++ VARIANT='Cloud Edition' 00:03:10.714 ++ VARIANT_ID=cloud 00:03:10.714 + uname -a 00:03:10.714 Linux spdk-wfp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jun 12 10:26:11 UTC 2024 x86_64 GNU/Linux 00:03:10.714 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:13.250 Hugepages 00:03:13.250 node hugesize free / total 00:03:13.250 node0 1048576kB 0 / 0 00:03:13.250 node0 2048kB 0 / 0 00:03:13.250 node1 1048576kB 0 / 0 00:03:13.250 node1 2048kB 0 / 0 00:03:13.250 00:03:13.250 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.250 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:13.250 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:13.250 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:13.250 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:13.250 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:13.250 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:13.250 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:13.250 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:13.251 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:13.251 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:13.251 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:13.251 + rm -f /tmp/spdk-ld-path 00:03:13.251 + source autorun-spdk.conf 00:03:13.251 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:13.251 ++ SPDK_TEST_NVMF=1 00:03:13.251 ++ SPDK_TEST_NVME_CLI=1 00:03:13.251 ++ SPDK_TEST_NVMF_NICS=mlx5 00:03:13.251 ++ SPDK_RUN_UBSAN=1 00:03:13.251 ++ NET_TYPE=phy 00:03:13.251 ++ RUN_NIGHTLY=0 00:03:13.251 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:13.251 + [[ -n '' ]] 00:03:13.251 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.251 + for M in /var/spdk/build-*-manifest.txt 00:03:13.251 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:13.251 + cp /var/spdk/build-kernel-manifest.txt 
/var/jenkins/workspace/nvmf-phy-autotest/output/ 00:03:13.251 + for M in /var/spdk/build-*-manifest.txt 00:03:13.251 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:13.251 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:03:13.251 + for M in /var/spdk/build-*-manifest.txt 00:03:13.251 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:13.251 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:03:13.251 ++ uname 00:03:13.511 + [[ Linux == \L\i\n\u\x ]] 00:03:13.511 + sudo dmesg -T 00:03:13.511 + sudo dmesg --clear 00:03:13.511 + dmesg_pid=496895 00:03:13.511 + [[ Fedora Linux == FreeBSD ]] 00:03:13.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.511 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.511 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:13.511 + [[ -x /usr/src/fio-static/fio ]] 00:03:13.511 + export FIO_BIN=/usr/src/fio-static/fio 00:03:13.511 + FIO_BIN=/usr/src/fio-static/fio 00:03:13.511 + sudo dmesg -Tw 00:03:13.511 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:13.511 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:13.511 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:13.511 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:13.511 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:13.511 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:13.511 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:13.511 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:13.511 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:13.511 Test configuration: 00:03:13.511 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:13.511 SPDK_TEST_NVMF=1 00:03:13.511 SPDK_TEST_NVME_CLI=1 00:03:13.511 SPDK_TEST_NVMF_NICS=mlx5 00:03:13.511 SPDK_RUN_UBSAN=1 00:03:13.511 NET_TYPE=phy 00:03:13.511 RUN_NIGHTLY=0 18:54:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:13.511 18:54:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:13.511 18:54:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.511 18:54:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.511 18:54:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.511 18:54:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.511 18:54:05 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.511 18:54:05 -- paths/export.sh@5 -- $ export PATH 00:03:13.511 18:54:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:13.511 18:54:05 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:13.511 18:54:05 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:13.511 18:54:05 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721926445.XXXXXX 00:03:13.511 18:54:05 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721926445.DOlAsS 00:03:13.511 18:54:05 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:13.511 18:54:05 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:13.511 18:54:05 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:03:13.511 18:54:05 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:13.511 18:54:05 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:13.511 18:54:05 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:13.511 18:54:05 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:03:13.511 18:54:05 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.511 18:54:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:03:13.511 18:54:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:13.511 18:54:05 -- pm/common@17 -- $ local monitor 00:03:13.511 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.511 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.511 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.511 18:54:05 -- pm/common@21 -- $ date +%s 00:03:13.511 18:54:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.511 18:54:05 -- pm/common@21 -- $ date +%s 00:03:13.511 18:54:05 -- pm/common@25 -- $ sleep 1 00:03:13.511 18:54:05 -- pm/common@21 -- $ date +%s 00:03:13.511 18:54:05 -- pm/common@21 -- $ date +%s 00:03:13.511 18:54:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926445 00:03:13.511 18:54:05 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926445 00:03:13.511 18:54:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926445 00:03:13.511 18:54:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721926445 00:03:13.511 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926445_collect-vmstat.pm.log 00:03:13.512 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926445_collect-cpu-load.pm.log 00:03:13.512 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926445_collect-cpu-temp.pm.log 00:03:13.512 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721926445_collect-bmc-pm.bmc.pm.log 00:03:14.892 18:54:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:14.892 18:54:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:14.892 18:54:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:14.892 18:54:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:14.892 18:54:06 -- spdk/autobuild.sh@16 -- $ date -u 00:03:14.892 Thu Jul 25 04:54:06 PM UTC 2024 00:03:14.892 18:54:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:14.892 v24.09-pre-321-g704257090 00:03:14.892 18:54:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:14.892 18:54:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:14.892 18:54:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:14.892 18:54:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:14.892 18:54:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:14.892 18:54:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.892 ************************************ 00:03:14.892 START TEST ubsan 00:03:14.892 ************************************ 00:03:14.892 18:54:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:14.892 using ubsan 00:03:14.892 00:03:14.892 real 0m0.000s 00:03:14.892 user 0m0.000s 00:03:14.892 sys 0m0.000s 00:03:14.892 18:54:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:14.892 18:54:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:14.892 ************************************ 00:03:14.892 END TEST ubsan 00:03:14.892 ************************************ 00:03:14.892 18:54:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:14.892 18:54:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.892 18:54:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.892 18:54:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:03:14.892 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:03:14.892 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:03:15.151 Using 'verbs' RDMA provider 00:03:27.931 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:40.144 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:40.144 Creating mk/config.mk...done. 00:03:40.144 Creating mk/cc.flags.mk...done. 00:03:40.144 Type 'make' to build. 00:03:40.144 18:54:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:03:40.144 18:54:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:40.144 18:54:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:40.144 18:54:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.144 ************************************ 00:03:40.144 START TEST make 00:03:40.144 ************************************ 00:03:40.144 18:54:31 make -- common/autotest_common.sh@1125 -- $ make -j96 00:03:40.144 make[1]: Nothing to be done for 'all'. 00:03:48.271 The Meson build system 00:03:48.271 Version: 1.4.1 00:03:48.271 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:03:48.271 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:03:48.271 Build type: native build 00:03:48.271 Program cat found: YES (/usr/bin/cat) 00:03:48.271 Project name: DPDK 00:03:48.271 Project version: 24.03.0 00:03:48.271 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:48.271 C linker for the host machine: cc ld.bfd 2.40-14 00:03:48.271 Host machine cpu family: x86_64 00:03:48.271 Host machine cpu: x86_64 00:03:48.271 Message: ## Building in Developer Mode ## 00:03:48.271 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:48.271 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:48.271 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:48.272 Program python3 found: YES (/usr/bin/python3) 00:03:48.272 Program cat found: YES (/usr/bin/cat) 00:03:48.272 Compiler for C supports arguments -march=native: YES 00:03:48.272 Checking for size of "void *" : 8 00:03:48.272 Checking for size of "void *" : 8 (cached) 00:03:48.272 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:48.272 Library m found: YES 00:03:48.272 Library numa found: YES 00:03:48.272 Has header "numaif.h" : YES 00:03:48.272 Library fdt found: NO 00:03:48.272 Library execinfo found: NO 00:03:48.272 Has header "execinfo.h" : YES 00:03:48.272 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:48.272 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:48.272 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:48.272 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:48.272 Run-time dependency openssl found: YES 3.1.1 00:03:48.272 Run-time dependency libpcap found: YES 1.10.4 00:03:48.272 Has header "pcap.h" with dependency libpcap: YES 00:03:48.272 Compiler for C supports arguments -Wcast-qual: YES 00:03:48.272 Compiler for C supports arguments -Wdeprecated: YES 00:03:48.272 Compiler for C supports arguments -Wformat: YES 00:03:48.272 Compiler for C supports 
arguments -Wformat-nonliteral: NO 00:03:48.272 Compiler for C supports arguments -Wformat-security: NO 00:03:48.272 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:48.272 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:48.272 Compiler for C supports arguments -Wnested-externs: YES 00:03:48.272 Compiler for C supports arguments -Wold-style-definition: YES 00:03:48.272 Compiler for C supports arguments -Wpointer-arith: YES 00:03:48.272 Compiler for C supports arguments -Wsign-compare: YES 00:03:48.272 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:48.272 Compiler for C supports arguments -Wundef: YES 00:03:48.272 Compiler for C supports arguments -Wwrite-strings: YES 00:03:48.272 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:48.272 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:48.272 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:48.272 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:48.272 Program objdump found: YES (/usr/bin/objdump) 00:03:48.272 Compiler for C supports arguments -mavx512f: YES 00:03:48.272 Checking if "AVX512 checking" compiles: YES 00:03:48.272 Fetching value of define "__SSE4_2__" : 1 00:03:48.272 Fetching value of define "__AES__" : 1 00:03:48.272 Fetching value of define "__AVX__" : 1 00:03:48.272 Fetching value of define "__AVX2__" : 1 00:03:48.272 Fetching value of define "__AVX512BW__" : 1 00:03:48.272 Fetching value of define "__AVX512CD__" : 1 00:03:48.272 Fetching value of define "__AVX512DQ__" : 1 00:03:48.272 Fetching value of define "__AVX512F__" : 1 00:03:48.272 Fetching value of define "__AVX512VL__" : 1 00:03:48.272 Fetching value of define "__PCLMUL__" : 1 00:03:48.272 Fetching value of define "__RDRND__" : 1 00:03:48.272 Fetching value of define "__RDSEED__" : 1 00:03:48.272 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:48.272 Fetching value of define "__znver1__" : (undefined) 00:03:48.272 Fetching value of define "__znver2__" : (undefined) 00:03:48.272 Fetching value of define "__znver3__" : (undefined) 00:03:48.272 Fetching value of define "__znver4__" : (undefined) 00:03:48.272 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:48.272 Message: lib/log: Defining dependency "log" 00:03:48.272 Message: lib/kvargs: Defining dependency "kvargs" 00:03:48.272 Message: lib/telemetry: Defining dependency "telemetry" 00:03:48.272 Checking for function "getentropy" : NO 00:03:48.272 Message: lib/eal: Defining dependency "eal" 00:03:48.272 Message: lib/ring: Defining dependency "ring" 00:03:48.272 Message: lib/rcu: Defining dependency "rcu" 00:03:48.272 Message: lib/mempool: Defining dependency "mempool" 00:03:48.272 Message: lib/mbuf: Defining dependency "mbuf" 00:03:48.272 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:48.272 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:48.272 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:48.272 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:48.272 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:48.272 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:48.272 Compiler for C supports arguments -mpclmul: YES 00:03:48.272 Compiler for C supports arguments -maes: YES 00:03:48.272 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:48.272 Compiler for C supports arguments -mavx512bw: YES 00:03:48.272 Compiler for C supports arguments -mavx512dq: YES 
00:03:48.272 Compiler for C supports arguments -mavx512vl: YES 00:03:48.272 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:48.272 Compiler for C supports arguments -mavx2: YES 00:03:48.272 Compiler for C supports arguments -mavx: YES 00:03:48.272 Message: lib/net: Defining dependency "net" 00:03:48.272 Message: lib/meter: Defining dependency "meter" 00:03:48.272 Message: lib/ethdev: Defining dependency "ethdev" 00:03:48.272 Message: lib/pci: Defining dependency "pci" 00:03:48.272 Message: lib/cmdline: Defining dependency "cmdline" 00:03:48.272 Message: lib/hash: Defining dependency "hash" 00:03:48.272 Message: lib/timer: Defining dependency "timer" 00:03:48.272 Message: lib/compressdev: Defining dependency "compressdev" 00:03:48.272 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:48.272 Message: lib/dmadev: Defining dependency "dmadev" 00:03:48.272 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:48.272 Message: lib/power: Defining dependency "power" 00:03:48.272 Message: lib/reorder: Defining dependency "reorder" 00:03:48.272 Message: lib/security: Defining dependency "security" 00:03:48.272 Has header "linux/userfaultfd.h" : YES 00:03:48.272 Has header "linux/vduse.h" : YES 00:03:48.272 Message: lib/vhost: Defining dependency "vhost" 00:03:48.272 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:48.272 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:48.272 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:48.272 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:48.272 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:48.272 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:48.272 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:48.272 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:48.272 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:48.272 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:48.272 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:48.272 Configuring doxy-api-html.conf using configuration 00:03:48.272 Configuring doxy-api-man.conf using configuration 00:03:48.272 Program mandb found: YES (/usr/bin/mandb) 00:03:48.272 Program sphinx-build found: NO 00:03:48.272 Configuring rte_build_config.h using configuration 00:03:48.272 Message: 00:03:48.272 ================= 00:03:48.272 Applications Enabled 00:03:48.272 ================= 00:03:48.272 00:03:48.272 apps: 00:03:48.272 00:03:48.272 00:03:48.272 Message: 00:03:48.272 ================= 00:03:48.272 Libraries Enabled 00:03:48.272 ================= 00:03:48.272 00:03:48.272 libs: 00:03:48.272 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:48.272 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:48.272 cryptodev, dmadev, power, reorder, security, vhost, 00:03:48.272 00:03:48.272 Message: 00:03:48.272 =============== 00:03:48.272 Drivers Enabled 00:03:48.272 =============== 00:03:48.272 00:03:48.272 common: 00:03:48.272 00:03:48.272 bus: 00:03:48.272 pci, vdev, 00:03:48.272 mempool: 00:03:48.272 ring, 00:03:48.272 dma: 00:03:48.272 00:03:48.272 net: 00:03:48.272 00:03:48.272 crypto: 00:03:48.272 00:03:48.272 compress: 00:03:48.272 00:03:48.272 vdpa: 00:03:48.272 00:03:48.272 00:03:48.272 Message: 00:03:48.272 ================= 00:03:48.272 Content Skipped 00:03:48.272 ================= 00:03:48.272 
00:03:48.272 apps: 00:03:48.272 dumpcap: explicitly disabled via build config 00:03:48.272 graph: explicitly disabled via build config 00:03:48.272 pdump: explicitly disabled via build config 00:03:48.272 proc-info: explicitly disabled via build config 00:03:48.272 test-acl: explicitly disabled via build config 00:03:48.272 test-bbdev: explicitly disabled via build config 00:03:48.272 test-cmdline: explicitly disabled via build config 00:03:48.272 test-compress-perf: explicitly disabled via build config 00:03:48.272 test-crypto-perf: explicitly disabled via build config 00:03:48.272 test-dma-perf: explicitly disabled via build config 00:03:48.272 test-eventdev: explicitly disabled via build config 00:03:48.272 test-fib: explicitly disabled via build config 00:03:48.272 test-flow-perf: explicitly disabled via build config 00:03:48.272 test-gpudev: explicitly disabled via build config 00:03:48.272 test-mldev: explicitly disabled via build config 00:03:48.272 test-pipeline: explicitly disabled via build config 00:03:48.272 test-pmd: explicitly disabled via build config 00:03:48.272 test-regex: explicitly disabled via build config 00:03:48.272 test-sad: explicitly disabled via build config 00:03:48.272 test-security-perf: explicitly disabled via build config 00:03:48.272 00:03:48.272 libs: 00:03:48.273 argparse: explicitly disabled via build config 00:03:48.273 metrics: explicitly disabled via build config 00:03:48.273 acl: explicitly disabled via build config 00:03:48.273 bbdev: explicitly disabled via build config 00:03:48.273 bitratestats: explicitly disabled via build config 00:03:48.273 bpf: explicitly disabled via build config 00:03:48.273 cfgfile: explicitly disabled via build config 00:03:48.273 distributor: explicitly disabled via build config 00:03:48.273 efd: explicitly disabled via build config 00:03:48.273 eventdev: explicitly disabled via build config 00:03:48.273 dispatcher: explicitly disabled via build config 00:03:48.273 gpudev: explicitly disabled via build config 00:03:48.273 gro: explicitly disabled via build config 00:03:48.273 gso: explicitly disabled via build config 00:03:48.273 ip_frag: explicitly disabled via build config 00:03:48.273 jobstats: explicitly disabled via build config 00:03:48.273 latencystats: explicitly disabled via build config 00:03:48.273 lpm: explicitly disabled via build config 00:03:48.273 member: explicitly disabled via build config 00:03:48.273 pcapng: explicitly disabled via build config 00:03:48.273 rawdev: explicitly disabled via build config 00:03:48.273 regexdev: explicitly disabled via build config 00:03:48.273 mldev: explicitly disabled via build config 00:03:48.273 rib: explicitly disabled via build config 00:03:48.273 sched: explicitly disabled via build config 00:03:48.273 stack: explicitly disabled via build config 00:03:48.273 ipsec: explicitly disabled via build config 00:03:48.273 pdcp: explicitly disabled via build config 00:03:48.273 fib: explicitly disabled via build config 00:03:48.273 port: explicitly disabled via build config 00:03:48.273 pdump: explicitly disabled via build config 00:03:48.273 table: explicitly disabled via build config 00:03:48.273 pipeline: explicitly disabled via build config 00:03:48.273 graph: explicitly disabled via build config 00:03:48.273 node: explicitly disabled via build config 00:03:48.273 00:03:48.273 drivers: 00:03:48.273 common/cpt: not in enabled drivers build config 00:03:48.273 common/dpaax: not in enabled drivers build config 00:03:48.273 common/iavf: not in enabled drivers build config 
00:03:48.273 common/idpf: not in enabled drivers build config 00:03:48.273 common/ionic: not in enabled drivers build config 00:03:48.273 common/mvep: not in enabled drivers build config 00:03:48.273 common/octeontx: not in enabled drivers build config 00:03:48.273 bus/auxiliary: not in enabled drivers build config 00:03:48.273 bus/cdx: not in enabled drivers build config 00:03:48.273 bus/dpaa: not in enabled drivers build config 00:03:48.273 bus/fslmc: not in enabled drivers build config 00:03:48.273 bus/ifpga: not in enabled drivers build config 00:03:48.273 bus/platform: not in enabled drivers build config 00:03:48.273 bus/uacce: not in enabled drivers build config 00:03:48.273 bus/vmbus: not in enabled drivers build config 00:03:48.273 common/cnxk: not in enabled drivers build config 00:03:48.273 common/mlx5: not in enabled drivers build config 00:03:48.273 common/nfp: not in enabled drivers build config 00:03:48.273 common/nitrox: not in enabled drivers build config 00:03:48.273 common/qat: not in enabled drivers build config 00:03:48.273 common/sfc_efx: not in enabled drivers build config 00:03:48.273 mempool/bucket: not in enabled drivers build config 00:03:48.273 mempool/cnxk: not in enabled drivers build config 00:03:48.273 mempool/dpaa: not in enabled drivers build config 00:03:48.273 mempool/dpaa2: not in enabled drivers build config 00:03:48.273 mempool/octeontx: not in enabled drivers build config 00:03:48.273 mempool/stack: not in enabled drivers build config 00:03:48.273 dma/cnxk: not in enabled drivers build config 00:03:48.273 dma/dpaa: not in enabled drivers build config 00:03:48.273 dma/dpaa2: not in enabled drivers build config 00:03:48.273 dma/hisilicon: not in enabled drivers build config 00:03:48.273 dma/idxd: not in enabled drivers build config 00:03:48.273 dma/ioat: not in enabled drivers build config 00:03:48.273 dma/skeleton: not in enabled drivers build config 00:03:48.273 net/af_packet: not in enabled drivers build config 00:03:48.273 net/af_xdp: not in enabled drivers build config 00:03:48.273 net/ark: not in enabled drivers build config 00:03:48.273 net/atlantic: not in enabled drivers build config 00:03:48.273 net/avp: not in enabled drivers build config 00:03:48.273 net/axgbe: not in enabled drivers build config 00:03:48.273 net/bnx2x: not in enabled drivers build config 00:03:48.273 net/bnxt: not in enabled drivers build config 00:03:48.273 net/bonding: not in enabled drivers build config 00:03:48.273 net/cnxk: not in enabled drivers build config 00:03:48.273 net/cpfl: not in enabled drivers build config 00:03:48.273 net/cxgbe: not in enabled drivers build config 00:03:48.273 net/dpaa: not in enabled drivers build config 00:03:48.273 net/dpaa2: not in enabled drivers build config 00:03:48.273 net/e1000: not in enabled drivers build config 00:03:48.273 net/ena: not in enabled drivers build config 00:03:48.273 net/enetc: not in enabled drivers build config 00:03:48.273 net/enetfec: not in enabled drivers build config 00:03:48.273 net/enic: not in enabled drivers build config 00:03:48.273 net/failsafe: not in enabled drivers build config 00:03:48.273 net/fm10k: not in enabled drivers build config 00:03:48.273 net/gve: not in enabled drivers build config 00:03:48.273 net/hinic: not in enabled drivers build config 00:03:48.273 net/hns3: not in enabled drivers build config 00:03:48.273 net/i40e: not in enabled drivers build config 00:03:48.273 net/iavf: not in enabled drivers build config 00:03:48.273 net/ice: not in enabled drivers build config 00:03:48.273 
net/idpf: not in enabled drivers build config 00:03:48.273 net/igc: not in enabled drivers build config 00:03:48.273 net/ionic: not in enabled drivers build config 00:03:48.273 net/ipn3ke: not in enabled drivers build config 00:03:48.273 net/ixgbe: not in enabled drivers build config 00:03:48.273 net/mana: not in enabled drivers build config 00:03:48.273 net/memif: not in enabled drivers build config 00:03:48.273 net/mlx4: not in enabled drivers build config 00:03:48.273 net/mlx5: not in enabled drivers build config 00:03:48.273 net/mvneta: not in enabled drivers build config 00:03:48.273 net/mvpp2: not in enabled drivers build config 00:03:48.273 net/netvsc: not in enabled drivers build config 00:03:48.273 net/nfb: not in enabled drivers build config 00:03:48.273 net/nfp: not in enabled drivers build config 00:03:48.273 net/ngbe: not in enabled drivers build config 00:03:48.273 net/null: not in enabled drivers build config 00:03:48.273 net/octeontx: not in enabled drivers build config 00:03:48.273 net/octeon_ep: not in enabled drivers build config 00:03:48.273 net/pcap: not in enabled drivers build config 00:03:48.273 net/pfe: not in enabled drivers build config 00:03:48.273 net/qede: not in enabled drivers build config 00:03:48.273 net/ring: not in enabled drivers build config 00:03:48.273 net/sfc: not in enabled drivers build config 00:03:48.273 net/softnic: not in enabled drivers build config 00:03:48.273 net/tap: not in enabled drivers build config 00:03:48.273 net/thunderx: not in enabled drivers build config 00:03:48.273 net/txgbe: not in enabled drivers build config 00:03:48.273 net/vdev_netvsc: not in enabled drivers build config 00:03:48.273 net/vhost: not in enabled drivers build config 00:03:48.273 net/virtio: not in enabled drivers build config 00:03:48.273 net/vmxnet3: not in enabled drivers build config 00:03:48.273 raw/*: missing internal dependency, "rawdev" 00:03:48.273 crypto/armv8: not in enabled drivers build config 00:03:48.273 crypto/bcmfs: not in enabled drivers build config 00:03:48.273 crypto/caam_jr: not in enabled drivers build config 00:03:48.273 crypto/ccp: not in enabled drivers build config 00:03:48.273 crypto/cnxk: not in enabled drivers build config 00:03:48.273 crypto/dpaa_sec: not in enabled drivers build config 00:03:48.273 crypto/dpaa2_sec: not in enabled drivers build config 00:03:48.273 crypto/ipsec_mb: not in enabled drivers build config 00:03:48.273 crypto/mlx5: not in enabled drivers build config 00:03:48.273 crypto/mvsam: not in enabled drivers build config 00:03:48.273 crypto/nitrox: not in enabled drivers build config 00:03:48.273 crypto/null: not in enabled drivers build config 00:03:48.273 crypto/octeontx: not in enabled drivers build config 00:03:48.273 crypto/openssl: not in enabled drivers build config 00:03:48.273 crypto/scheduler: not in enabled drivers build config 00:03:48.273 crypto/uadk: not in enabled drivers build config 00:03:48.273 crypto/virtio: not in enabled drivers build config 00:03:48.273 compress/isal: not in enabled drivers build config 00:03:48.273 compress/mlx5: not in enabled drivers build config 00:03:48.273 compress/nitrox: not in enabled drivers build config 00:03:48.273 compress/octeontx: not in enabled drivers build config 00:03:48.273 compress/zlib: not in enabled drivers build config 00:03:48.273 regex/*: missing internal dependency, "regexdev" 00:03:48.273 ml/*: missing internal dependency, "mldev" 00:03:48.273 vdpa/ifc: not in enabled drivers build config 00:03:48.273 vdpa/mlx5: not in enabled drivers build 
config 00:03:48.273 vdpa/nfp: not in enabled drivers build config 00:03:48.273 vdpa/sfc: not in enabled drivers build config 00:03:48.273 event/*: missing internal dependency, "eventdev" 00:03:48.273 baseband/*: missing internal dependency, "bbdev" 00:03:48.273 gpu/*: missing internal dependency, "gpudev" 00:03:48.273 00:03:48.273 00:03:48.273 Build targets in project: 85 00:03:48.273 00:03:48.273 DPDK 24.03.0 00:03:48.273 00:03:48.273 User defined options 00:03:48.273 buildtype : debug 00:03:48.273 default_library : shared 00:03:48.273 libdir : lib 00:03:48.273 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:03:48.273 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:48.273 c_link_args : 00:03:48.273 cpu_instruction_set: native 00:03:48.274 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:48.274 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:48.274 enable_docs : false 00:03:48.274 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:48.274 enable_kmods : false 00:03:48.274 max_lcores : 128 00:03:48.274 tests : false 00:03:48.274 00:03:48.274 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:48.274 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:03:48.274 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:48.274 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:48.274 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:48.535 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:48.535 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:48.535 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:48.535 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:48.535 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:48.535 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:48.535 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:48.535 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:48.535 [12/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:48.535 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:48.535 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:48.535 [15/268] Linking static target lib/librte_kvargs.a 00:03:48.535 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:48.535 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:48.535 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:48.535 [19/268] Linking static target lib/librte_log.a 00:03:48.535 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:48.535 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:48.535 [22/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:48.535 [23/268] Linking static target lib/librte_pci.a 00:03:48.535 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:48.795 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:48.795 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:48.795 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:48.795 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:48.795 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:48.795 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:48.795 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:48.795 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:48.795 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:48.795 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:48.795 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:48.795 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:48.795 [37/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:48.795 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:48.795 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:48.795 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:48.795 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:48.795 [42/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:48.795 [43/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:48.795 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:48.795 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:48.795 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:48.795 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:48.795 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:48.795 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:48.795 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:48.795 [51/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.795 [52/268] Linking static target lib/librte_meter.a 00:03:48.795 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:48.795 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:48.795 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:48.795 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:48.796 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:48.796 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:49.056 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:49.056 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:49.056 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:49.056 [62/268] 
Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:49.056 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:49.056 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:49.056 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:49.056 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:49.056 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:49.056 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:49.056 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:49.056 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:49.056 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:49.056 [72/268] Linking static target lib/librte_ring.a 00:03:49.056 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:49.056 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:49.056 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:49.056 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:49.056 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:49.056 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:49.056 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:49.056 [80/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:49.056 [81/268] Linking static target lib/librte_telemetry.a 00:03:49.056 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:49.056 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:49.056 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:49.056 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:49.056 [86/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:49.056 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:49.056 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:49.056 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:49.056 [90/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:49.056 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:49.056 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:49.056 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:49.056 [94/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.056 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:49.056 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:49.056 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:49.056 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:49.056 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:49.056 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:49.056 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:49.056 [102/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:49.056 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:49.056 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:49.056 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:49.056 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:49.056 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:49.056 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:49.056 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:49.056 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:49.056 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:49.056 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:49.056 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:49.056 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:49.056 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:49.056 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:49.056 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:49.056 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:49.056 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:49.056 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:49.056 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:49.056 [122/268] Linking static target lib/librte_mempool.a 00:03:49.056 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:49.056 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:49.056 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:49.056 [126/268] Linking static target lib/librte_cmdline.a 00:03:49.056 [127/268] Linking static target lib/librte_net.a 00:03:49.056 [128/268] Linking static target lib/librte_rcu.a 00:03:49.056 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:49.056 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:49.056 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:49.056 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:49.315 [133/268] Linking static target lib/librte_eal.a 00:03:49.315 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:49.315 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.315 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.315 [137/268] Linking target lib/librte_log.so.24.1 00:03:49.315 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:49.315 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.315 [140/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:49.315 [141/268] Linking static target lib/librte_mbuf.a 00:03:49.315 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:49.315 [143/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:49.315 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:49.315 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:49.315 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:49.315 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:49.315 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:49.315 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:49.315 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:49.315 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:49.315 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:49.315 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:49.315 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:49.315 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:49.315 [156/268] Linking static target lib/librte_timer.a 00:03:49.315 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:49.315 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.315 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:49.315 [160/268] Linking target lib/librte_kvargs.so.24.1 00:03:49.315 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:49.315 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.315 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:49.315 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:49.315 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:49.315 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:49.315 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:49.315 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:49.574 [169/268] Linking target lib/librte_telemetry.so.24.1 00:03:49.574 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:49.574 [171/268] Linking static target lib/librte_compressdev.a 00:03:49.574 [172/268] Linking static target lib/librte_reorder.a 00:03:49.574 [173/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.574 [174/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:49.574 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:49.574 [176/268] Linking static target lib/librte_dmadev.a 00:03:49.574 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:49.574 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:49.574 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:49.574 [180/268] Linking static target lib/librte_power.a 00:03:49.574 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:49.574 [182/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:49.574 [183/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:49.574 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:49.574 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:49.574 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:49.574 [187/268] Linking static target lib/librte_security.a 00:03:49.574 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:49.574 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:49.574 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:49.574 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:49.574 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:49.574 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:49.574 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:49.574 [195/268] Linking static target lib/librte_hash.a 00:03:49.574 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:49.574 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:49.833 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:49.833 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:49.833 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:49.833 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:49.833 [202/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:49.833 [203/268] Linking static target drivers/librte_bus_pci.a 00:03:49.833 [204/268] Linking static target lib/librte_cryptodev.a 00:03:49.833 [205/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:49.833 [206/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.833 [207/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:49.833 [208/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:49.833 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:49.833 [210/268] Linking static target drivers/librte_bus_vdev.a 00:03:49.833 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:49.833 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:49.833 [213/268] Linking static target drivers/librte_mempool_ring.a 00:03:49.833 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.833 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.833 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:50.092 [217/268] Linking static target lib/librte_ethdev.a 00:03:50.092 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.092 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.092 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.092 [221/268] Generating lib/dmadev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:50.092 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.092 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.351 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:50.351 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.609 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.609 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.546 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:51.546 [229/268] Linking static target lib/librte_vhost.a 00:03:51.546 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.452 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.728 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.666 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.666 [234/268] Linking target lib/librte_eal.so.24.1 00:03:59.666 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:59.666 [236/268] Linking target lib/librte_ring.so.24.1 00:03:59.666 [237/268] Linking target lib/librte_meter.so.24.1 00:03:59.666 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:59.666 [239/268] Linking target lib/librte_pci.so.24.1 00:03:59.666 [240/268] Linking target lib/librte_timer.so.24.1 00:03:59.666 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:59.925 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:59.925 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:59.925 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:59.925 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:59.925 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:59.925 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:59.925 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:59.925 [249/268] Linking target lib/librte_rcu.so.24.1 00:03:59.925 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:59.925 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:00.184 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:00.184 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:00.184 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:00.184 [255/268] Linking target lib/librte_net.so.24.1 00:04:00.184 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:00.184 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:00.184 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:00.443 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:00.443 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:00.443 [261/268] Linking target lib/librte_cmdline.so.24.1 00:04:00.443 [262/268] 
Linking target lib/librte_hash.so.24.1 00:04:00.443 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:00.443 [264/268] Linking target lib/librte_security.so.24.1 00:04:00.443 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:00.443 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:00.703 [267/268] Linking target lib/librte_power.so.24.1 00:04:00.703 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:00.703 INFO: autodetecting backend as ninja 00:04:00.703 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:01.642 CC lib/ut_mock/mock.o 00:04:01.642 CC lib/log/log.o 00:04:01.642 CC lib/log/log_flags.o 00:04:01.642 CC lib/ut/ut.o 00:04:01.642 CC lib/log/log_deprecated.o 00:04:01.902 LIB libspdk_log.a 00:04:01.902 LIB libspdk_ut.a 00:04:01.902 LIB libspdk_ut_mock.a 00:04:01.902 SO libspdk_log.so.7.0 00:04:01.902 SO libspdk_ut.so.2.0 00:04:01.902 SO libspdk_ut_mock.so.6.0 00:04:01.902 SYMLINK libspdk_ut.so 00:04:01.902 SYMLINK libspdk_ut_mock.so 00:04:01.902 SYMLINK libspdk_log.so 00:04:02.185 CC lib/dma/dma.o 00:04:02.185 CXX lib/trace_parser/trace.o 00:04:02.185 CC lib/util/base64.o 00:04:02.185 CC lib/util/bit_array.o 00:04:02.185 CC lib/util/cpuset.o 00:04:02.185 CC lib/util/crc16.o 00:04:02.185 CC lib/ioat/ioat.o 00:04:02.185 CC lib/util/crc32.o 00:04:02.185 CC lib/util/crc32_ieee.o 00:04:02.185 CC lib/util/crc32c.o 00:04:02.185 CC lib/util/crc64.o 00:04:02.185 CC lib/util/dif.o 00:04:02.185 CC lib/util/fd.o 00:04:02.185 CC lib/util/fd_group.o 00:04:02.185 CC lib/util/file.o 00:04:02.185 CC lib/util/hexlify.o 00:04:02.185 CC lib/util/iov.o 00:04:02.185 CC lib/util/math.o 00:04:02.185 CC lib/util/net.o 00:04:02.185 CC lib/util/pipe.o 00:04:02.185 CC lib/util/strerror_tls.o 00:04:02.185 CC lib/util/uuid.o 00:04:02.185 CC lib/util/string.o 00:04:02.185 CC lib/util/xor.o 00:04:02.185 CC lib/util/zipf.o 00:04:02.444 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.444 CC lib/vfio_user/host/vfio_user.o 00:04:02.444 LIB libspdk_dma.a 00:04:02.444 SO libspdk_dma.so.4.0 00:04:02.444 LIB libspdk_ioat.a 00:04:02.444 SYMLINK libspdk_dma.so 00:04:02.444 SO libspdk_ioat.so.7.0 00:04:02.703 SYMLINK libspdk_ioat.so 00:04:02.703 LIB libspdk_vfio_user.a 00:04:02.703 SO libspdk_vfio_user.so.5.0 00:04:02.703 LIB libspdk_util.a 00:04:02.703 SYMLINK libspdk_vfio_user.so 00:04:02.703 SO libspdk_util.so.10.0 00:04:02.962 SYMLINK libspdk_util.so 00:04:02.962 LIB libspdk_trace_parser.a 00:04:02.962 SO libspdk_trace_parser.so.5.0 00:04:02.962 SYMLINK libspdk_trace_parser.so 00:04:03.220 CC lib/conf/conf.o 00:04:03.220 CC lib/json/json_parse.o 00:04:03.220 CC lib/json/json_util.o 00:04:03.220 CC lib/json/json_write.o 00:04:03.220 CC lib/rdma_provider/common.o 00:04:03.220 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:03.220 CC lib/vmd/vmd.o 00:04:03.220 CC lib/rdma_utils/rdma_utils.o 00:04:03.220 CC lib/vmd/led.o 00:04:03.220 CC lib/env_dpdk/env.o 00:04:03.220 CC lib/env_dpdk/memory.o 00:04:03.220 CC lib/env_dpdk/pci.o 00:04:03.220 CC lib/idxd/idxd.o 00:04:03.220 CC lib/env_dpdk/init.o 00:04:03.220 CC lib/idxd/idxd_user.o 00:04:03.220 CC lib/env_dpdk/threads.o 00:04:03.220 CC lib/idxd/idxd_kernel.o 00:04:03.220 CC lib/env_dpdk/pci_ioat.o 00:04:03.220 CC lib/env_dpdk/pci_virtio.o 00:04:03.220 CC lib/env_dpdk/pci_vmd.o 00:04:03.220 CC lib/env_dpdk/pci_idxd.o 00:04:03.220 CC lib/env_dpdk/pci_event.o 00:04:03.220 CC 
lib/env_dpdk/sigbus_handler.o 00:04:03.220 CC lib/env_dpdk/pci_dpdk.o 00:04:03.220 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:03.220 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.479 LIB libspdk_rdma_provider.a 00:04:03.479 LIB libspdk_conf.a 00:04:03.479 SO libspdk_rdma_provider.so.6.0 00:04:03.479 SO libspdk_conf.so.6.0 00:04:03.479 LIB libspdk_rdma_utils.a 00:04:03.479 LIB libspdk_json.a 00:04:03.479 SYMLINK libspdk_rdma_provider.so 00:04:03.479 SO libspdk_rdma_utils.so.1.0 00:04:03.479 SYMLINK libspdk_conf.so 00:04:03.479 SO libspdk_json.so.6.0 00:04:03.479 SYMLINK libspdk_rdma_utils.so 00:04:03.479 SYMLINK libspdk_json.so 00:04:03.739 LIB libspdk_idxd.a 00:04:03.739 SO libspdk_idxd.so.12.0 00:04:03.739 LIB libspdk_vmd.a 00:04:03.739 SYMLINK libspdk_idxd.so 00:04:03.739 SO libspdk_vmd.so.6.0 00:04:03.739 SYMLINK libspdk_vmd.so 00:04:03.739 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.739 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.739 CC lib/jsonrpc/jsonrpc_client.o 00:04:03.739 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.998 LIB libspdk_jsonrpc.a 00:04:03.999 SO libspdk_jsonrpc.so.6.0 00:04:04.258 SYMLINK libspdk_jsonrpc.so 00:04:04.258 LIB libspdk_env_dpdk.a 00:04:04.258 SO libspdk_env_dpdk.so.15.0 00:04:04.258 SYMLINK libspdk_env_dpdk.so 00:04:04.517 CC lib/rpc/rpc.o 00:04:04.777 LIB libspdk_rpc.a 00:04:04.777 SO libspdk_rpc.so.6.0 00:04:04.777 SYMLINK libspdk_rpc.so 00:04:05.036 CC lib/keyring/keyring.o 00:04:05.036 CC lib/notify/notify.o 00:04:05.036 CC lib/keyring/keyring_rpc.o 00:04:05.036 CC lib/notify/notify_rpc.o 00:04:05.036 CC lib/trace/trace.o 00:04:05.036 CC lib/trace/trace_flags.o 00:04:05.036 CC lib/trace/trace_rpc.o 00:04:05.296 LIB libspdk_notify.a 00:04:05.296 SO libspdk_notify.so.6.0 00:04:05.296 LIB libspdk_keyring.a 00:04:05.296 LIB libspdk_trace.a 00:04:05.296 SO libspdk_keyring.so.1.0 00:04:05.296 SYMLINK libspdk_notify.so 00:04:05.296 SO libspdk_trace.so.10.0 00:04:05.296 SYMLINK libspdk_keyring.so 00:04:05.296 SYMLINK libspdk_trace.so 00:04:05.556 CC lib/sock/sock.o 00:04:05.556 CC lib/sock/sock_rpc.o 00:04:05.556 CC lib/thread/thread.o 00:04:05.556 CC lib/thread/iobuf.o 00:04:06.124 LIB libspdk_sock.a 00:04:06.124 SO libspdk_sock.so.10.0 00:04:06.124 SYMLINK libspdk_sock.so 00:04:06.383 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.383 CC lib/nvme/nvme_ctrlr.o 00:04:06.383 CC lib/nvme/nvme_fabric.o 00:04:06.383 CC lib/nvme/nvme_ns_cmd.o 00:04:06.383 CC lib/nvme/nvme_ns.o 00:04:06.383 CC lib/nvme/nvme_pcie_common.o 00:04:06.383 CC lib/nvme/nvme_pcie.o 00:04:06.383 CC lib/nvme/nvme_qpair.o 00:04:06.383 CC lib/nvme/nvme.o 00:04:06.383 CC lib/nvme/nvme_quirks.o 00:04:06.383 CC lib/nvme/nvme_transport.o 00:04:06.383 CC lib/nvme/nvme_discovery.o 00:04:06.383 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:06.383 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:06.383 CC lib/nvme/nvme_tcp.o 00:04:06.383 CC lib/nvme/nvme_opal.o 00:04:06.383 CC lib/nvme/nvme_io_msg.o 00:04:06.383 CC lib/nvme/nvme_poll_group.o 00:04:06.383 CC lib/nvme/nvme_zns.o 00:04:06.383 CC lib/nvme/nvme_stubs.o 00:04:06.383 CC lib/nvme/nvme_auth.o 00:04:06.383 CC lib/nvme/nvme_cuse.o 00:04:06.383 CC lib/nvme/nvme_rdma.o 00:04:06.642 LIB libspdk_thread.a 00:04:06.901 SO libspdk_thread.so.10.1 00:04:06.902 SYMLINK libspdk_thread.so 00:04:07.160 CC lib/blob/blobstore.o 00:04:07.160 CC lib/blob/request.o 00:04:07.160 CC lib/blob/zeroes.o 00:04:07.160 CC lib/blob/blob_bs_dev.o 00:04:07.160 CC lib/init/json_config.o 00:04:07.160 CC lib/init/subsystem_rpc.o 00:04:07.160 CC lib/init/rpc.o 00:04:07.160 CC lib/init/subsystem.o 00:04:07.160 CC 
lib/accel/accel.o 00:04:07.160 CC lib/accel/accel_rpc.o 00:04:07.160 CC lib/accel/accel_sw.o 00:04:07.160 CC lib/virtio/virtio.o 00:04:07.160 CC lib/virtio/virtio_vhost_user.o 00:04:07.160 CC lib/virtio/virtio_vfio_user.o 00:04:07.160 CC lib/virtio/virtio_pci.o 00:04:07.419 LIB libspdk_init.a 00:04:07.419 SO libspdk_init.so.5.0 00:04:07.419 LIB libspdk_virtio.a 00:04:07.419 SYMLINK libspdk_init.so 00:04:07.419 SO libspdk_virtio.so.7.0 00:04:07.419 SYMLINK libspdk_virtio.so 00:04:07.678 CC lib/event/app.o 00:04:07.678 CC lib/event/reactor.o 00:04:07.678 CC lib/event/log_rpc.o 00:04:07.678 CC lib/event/app_rpc.o 00:04:07.678 CC lib/event/scheduler_static.o 00:04:07.938 LIB libspdk_accel.a 00:04:07.938 SO libspdk_accel.so.16.0 00:04:07.938 LIB libspdk_nvme.a 00:04:07.938 SYMLINK libspdk_accel.so 00:04:07.938 LIB libspdk_event.a 00:04:07.938 SO libspdk_nvme.so.13.1 00:04:08.196 SO libspdk_event.so.14.0 00:04:08.196 SYMLINK libspdk_event.so 00:04:08.196 SYMLINK libspdk_nvme.so 00:04:08.196 CC lib/bdev/bdev.o 00:04:08.196 CC lib/bdev/bdev_rpc.o 00:04:08.196 CC lib/bdev/bdev_zone.o 00:04:08.196 CC lib/bdev/part.o 00:04:08.196 CC lib/bdev/scsi_nvme.o 00:04:09.131 LIB libspdk_blob.a 00:04:09.131 SO libspdk_blob.so.11.0 00:04:09.390 SYMLINK libspdk_blob.so 00:04:09.649 CC lib/blobfs/blobfs.o 00:04:09.649 CC lib/lvol/lvol.o 00:04:09.649 CC lib/blobfs/tree.o 00:04:10.216 LIB libspdk_bdev.a 00:04:10.216 SO libspdk_bdev.so.16.0 00:04:10.216 LIB libspdk_blobfs.a 00:04:10.216 SYMLINK libspdk_bdev.so 00:04:10.216 SO libspdk_blobfs.so.10.0 00:04:10.216 LIB libspdk_lvol.a 00:04:10.216 SYMLINK libspdk_blobfs.so 00:04:10.216 SO libspdk_lvol.so.10.0 00:04:10.476 SYMLINK libspdk_lvol.so 00:04:10.476 CC lib/nbd/nbd.o 00:04:10.476 CC lib/nbd/nbd_rpc.o 00:04:10.476 CC lib/ftl/ftl_core.o 00:04:10.476 CC lib/scsi/dev.o 00:04:10.476 CC lib/ftl/ftl_init.o 00:04:10.476 CC lib/nvmf/ctrlr.o 00:04:10.476 CC lib/scsi/lun.o 00:04:10.476 CC lib/ftl/ftl_layout.o 00:04:10.476 CC lib/nvmf/ctrlr_discovery.o 00:04:10.476 CC lib/scsi/port.o 00:04:10.476 CC lib/ftl/ftl_debug.o 00:04:10.476 CC lib/scsi/scsi.o 00:04:10.476 CC lib/ftl/ftl_sb.o 00:04:10.476 CC lib/nvmf/ctrlr_bdev.o 00:04:10.476 CC lib/ftl/ftl_io.o 00:04:10.476 CC lib/scsi/scsi_bdev.o 00:04:10.476 CC lib/nvmf/subsystem.o 00:04:10.476 CC lib/scsi/scsi_pr.o 00:04:10.476 CC lib/ublk/ublk.o 00:04:10.476 CC lib/ftl/ftl_l2p.o 00:04:10.476 CC lib/nvmf/nvmf.o 00:04:10.476 CC lib/ftl/ftl_l2p_flat.o 00:04:10.476 CC lib/nvmf/nvmf_rpc.o 00:04:10.476 CC lib/ftl/ftl_nv_cache.o 00:04:10.476 CC lib/ublk/ublk_rpc.o 00:04:10.476 CC lib/scsi/scsi_rpc.o 00:04:10.476 CC lib/scsi/task.o 00:04:10.476 CC lib/nvmf/tcp.o 00:04:10.476 CC lib/nvmf/transport.o 00:04:10.476 CC lib/nvmf/stubs.o 00:04:10.476 CC lib/ftl/ftl_band.o 00:04:10.476 CC lib/ftl/ftl_band_ops.o 00:04:10.476 CC lib/nvmf/mdns_server.o 00:04:10.476 CC lib/nvmf/rdma.o 00:04:10.476 CC lib/ftl/ftl_writer.o 00:04:10.476 CC lib/nvmf/auth.o 00:04:10.476 CC lib/ftl/ftl_rq.o 00:04:10.476 CC lib/ftl/ftl_reloc.o 00:04:10.476 CC lib/ftl/ftl_l2p_cache.o 00:04:10.476 CC lib/ftl/ftl_p2l.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:10.476 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:10.477 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:10.477 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:10.477 CC lib/ftl/utils/ftl_conf.o 00:04:10.477 CC lib/ftl/utils/ftl_md.o 00:04:10.477 CC lib/ftl/utils/ftl_mempool.o 00:04:10.477 CC lib/ftl/utils/ftl_bitmap.o 00:04:10.477 CC lib/ftl/utils/ftl_property.o 00:04:10.477 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:10.477 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:10.477 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:10.477 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:10.477 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:10.477 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:10.477 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:10.477 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:10.477 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:10.477 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:10.477 CC lib/ftl/base/ftl_base_dev.o 00:04:10.477 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:10.477 CC lib/ftl/ftl_trace.o 00:04:10.477 CC lib/ftl/base/ftl_base_bdev.o 00:04:11.049 LIB libspdk_scsi.a 00:04:11.049 LIB libspdk_nbd.a 00:04:11.049 SO libspdk_scsi.so.9.0 00:04:11.049 SO libspdk_nbd.so.7.0 00:04:11.309 LIB libspdk_ublk.a 00:04:11.309 SYMLINK libspdk_nbd.so 00:04:11.309 SYMLINK libspdk_scsi.so 00:04:11.309 SO libspdk_ublk.so.3.0 00:04:11.309 SYMLINK libspdk_ublk.so 00:04:11.568 CC lib/vhost/vhost_rpc.o 00:04:11.568 CC lib/vhost/vhost.o 00:04:11.568 CC lib/vhost/vhost_scsi.o 00:04:11.568 CC lib/vhost/vhost_blk.o 00:04:11.568 CC lib/vhost/rte_vhost_user.o 00:04:11.568 CC lib/iscsi/conn.o 00:04:11.568 CC lib/iscsi/init_grp.o 00:04:11.568 CC lib/iscsi/iscsi.o 00:04:11.568 CC lib/iscsi/md5.o 00:04:11.568 CC lib/iscsi/param.o 00:04:11.568 CC lib/iscsi/portal_grp.o 00:04:11.568 CC lib/iscsi/tgt_node.o 00:04:11.568 CC lib/iscsi/iscsi_subsystem.o 00:04:11.568 CC lib/iscsi/iscsi_rpc.o 00:04:11.568 CC lib/iscsi/task.o 00:04:11.568 LIB libspdk_ftl.a 00:04:11.568 SO libspdk_ftl.so.9.0 00:04:11.828 SYMLINK libspdk_ftl.so 00:04:12.395 LIB libspdk_vhost.a 00:04:12.395 SO libspdk_vhost.so.8.0 00:04:12.395 LIB libspdk_nvmf.a 00:04:12.395 SO libspdk_nvmf.so.19.0 00:04:12.395 SYMLINK libspdk_vhost.so 00:04:12.395 LIB libspdk_iscsi.a 00:04:12.654 SO libspdk_iscsi.so.8.0 00:04:12.654 SYMLINK libspdk_nvmf.so 00:04:12.654 SYMLINK libspdk_iscsi.so 00:04:13.222 CC module/env_dpdk/env_dpdk_rpc.o 00:04:13.223 CC module/accel/iaa/accel_iaa.o 00:04:13.223 CC module/accel/iaa/accel_iaa_rpc.o 00:04:13.223 LIB libspdk_env_dpdk_rpc.a 00:04:13.223 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:13.223 CC module/accel/dsa/accel_dsa.o 00:04:13.223 CC module/accel/error/accel_error.o 00:04:13.223 CC module/accel/dsa/accel_dsa_rpc.o 00:04:13.223 CC module/accel/error/accel_error_rpc.o 00:04:13.223 CC module/keyring/linux/keyring.o 00:04:13.223 CC module/keyring/linux/keyring_rpc.o 00:04:13.223 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:13.223 CC module/sock/posix/posix.o 00:04:13.223 CC module/accel/ioat/accel_ioat.o 00:04:13.223 CC module/keyring/file/keyring.o 00:04:13.223 CC module/accel/ioat/accel_ioat_rpc.o 00:04:13.223 CC module/keyring/file/keyring_rpc.o 00:04:13.223 CC module/scheduler/gscheduler/gscheduler.o 00:04:13.223 CC module/blob/bdev/blob_bdev.o 00:04:13.223 SO libspdk_env_dpdk_rpc.so.6.0 00:04:13.481 SYMLINK libspdk_env_dpdk_rpc.so 00:04:13.481 LIB libspdk_scheduler_dpdk_governor.a 00:04:13.481 LIB libspdk_keyring_linux.a 00:04:13.481 LIB libspdk_accel_error.a 00:04:13.481 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:13.481 LIB libspdk_keyring_file.a 
00:04:13.481 LIB libspdk_scheduler_gscheduler.a 00:04:13.481 SO libspdk_keyring_linux.so.1.0 00:04:13.481 LIB libspdk_accel_iaa.a 00:04:13.481 SO libspdk_accel_error.so.2.0 00:04:13.482 SO libspdk_keyring_file.so.1.0 00:04:13.482 LIB libspdk_scheduler_dynamic.a 00:04:13.482 LIB libspdk_accel_ioat.a 00:04:13.482 SO libspdk_scheduler_gscheduler.so.4.0 00:04:13.482 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:13.482 SO libspdk_accel_iaa.so.3.0 00:04:13.482 SO libspdk_accel_ioat.so.6.0 00:04:13.482 SO libspdk_scheduler_dynamic.so.4.0 00:04:13.482 LIB libspdk_accel_dsa.a 00:04:13.482 SYMLINK libspdk_keyring_file.so 00:04:13.482 SYMLINK libspdk_keyring_linux.so 00:04:13.482 LIB libspdk_blob_bdev.a 00:04:13.482 SYMLINK libspdk_scheduler_gscheduler.so 00:04:13.482 SYMLINK libspdk_accel_error.so 00:04:13.482 SO libspdk_accel_dsa.so.5.0 00:04:13.482 SYMLINK libspdk_scheduler_dynamic.so 00:04:13.482 SYMLINK libspdk_accel_ioat.so 00:04:13.482 SYMLINK libspdk_accel_iaa.so 00:04:13.482 SO libspdk_blob_bdev.so.11.0 00:04:13.741 SYMLINK libspdk_accel_dsa.so 00:04:13.741 SYMLINK libspdk_blob_bdev.so 00:04:14.000 LIB libspdk_sock_posix.a 00:04:14.000 SO libspdk_sock_posix.so.6.0 00:04:14.000 SYMLINK libspdk_sock_posix.so 00:04:14.000 CC module/blobfs/bdev/blobfs_bdev.o 00:04:14.000 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:14.000 CC module/bdev/gpt/gpt.o 00:04:14.000 CC module/bdev/gpt/vbdev_gpt.o 00:04:14.000 CC module/bdev/malloc/bdev_malloc.o 00:04:14.000 CC module/bdev/error/vbdev_error.o 00:04:14.000 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:14.000 CC module/bdev/error/vbdev_error_rpc.o 00:04:14.000 CC module/bdev/aio/bdev_aio.o 00:04:14.000 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:14.000 CC module/bdev/delay/vbdev_delay.o 00:04:14.000 CC module/bdev/aio/bdev_aio_rpc.o 00:04:14.000 CC module/bdev/passthru/vbdev_passthru.o 00:04:14.000 CC module/bdev/null/bdev_null.o 00:04:14.000 CC module/bdev/lvol/vbdev_lvol.o 00:04:14.000 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:14.000 CC module/bdev/null/bdev_null_rpc.o 00:04:14.001 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:14.001 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:14.001 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:14.259 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:14.259 CC module/bdev/iscsi/bdev_iscsi.o 00:04:14.259 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:14.259 CC module/bdev/nvme/bdev_nvme.o 00:04:14.259 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:14.259 CC module/bdev/nvme/bdev_mdns_client.o 00:04:14.259 CC module/bdev/nvme/nvme_rpc.o 00:04:14.259 CC module/bdev/ftl/bdev_ftl.o 00:04:14.259 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:14.259 CC module/bdev/split/vbdev_split.o 00:04:14.259 CC module/bdev/nvme/vbdev_opal.o 00:04:14.259 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:14.259 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:14.259 CC module/bdev/raid/bdev_raid.o 00:04:14.259 CC module/bdev/split/vbdev_split_rpc.o 00:04:14.259 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:14.259 CC module/bdev/raid/bdev_raid_rpc.o 00:04:14.259 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:14.259 CC module/bdev/raid/raid0.o 00:04:14.259 CC module/bdev/raid/bdev_raid_sb.o 00:04:14.259 CC module/bdev/raid/raid1.o 00:04:14.259 CC module/bdev/raid/concat.o 00:04:14.259 LIB libspdk_blobfs_bdev.a 00:04:14.518 SO libspdk_blobfs_bdev.so.6.0 00:04:14.518 LIB libspdk_bdev_error.a 00:04:14.518 LIB libspdk_bdev_split.a 00:04:14.518 LIB libspdk_bdev_gpt.a 00:04:14.518 LIB libspdk_bdev_null.a 00:04:14.518 SO libspdk_bdev_split.so.6.0 
00:04:14.518 SO libspdk_bdev_error.so.6.0 00:04:14.518 SO libspdk_bdev_null.so.6.0 00:04:14.518 SYMLINK libspdk_blobfs_bdev.so 00:04:14.518 SO libspdk_bdev_gpt.so.6.0 00:04:14.518 LIB libspdk_bdev_passthru.a 00:04:14.518 LIB libspdk_bdev_ftl.a 00:04:14.518 LIB libspdk_bdev_zone_block.a 00:04:14.518 LIB libspdk_bdev_malloc.a 00:04:14.518 LIB libspdk_bdev_aio.a 00:04:14.518 SO libspdk_bdev_passthru.so.6.0 00:04:14.518 SYMLINK libspdk_bdev_split.so 00:04:14.518 SO libspdk_bdev_ftl.so.6.0 00:04:14.518 SYMLINK libspdk_bdev_gpt.so 00:04:14.518 SO libspdk_bdev_zone_block.so.6.0 00:04:14.518 SO libspdk_bdev_malloc.so.6.0 00:04:14.518 SYMLINK libspdk_bdev_error.so 00:04:14.518 SYMLINK libspdk_bdev_null.so 00:04:14.518 LIB libspdk_bdev_delay.a 00:04:14.518 LIB libspdk_bdev_iscsi.a 00:04:14.518 SO libspdk_bdev_aio.so.6.0 00:04:14.518 SYMLINK libspdk_bdev_passthru.so 00:04:14.518 SO libspdk_bdev_delay.so.6.0 00:04:14.518 SO libspdk_bdev_iscsi.so.6.0 00:04:14.518 SYMLINK libspdk_bdev_zone_block.so 00:04:14.518 SYMLINK libspdk_bdev_malloc.so 00:04:14.518 SYMLINK libspdk_bdev_ftl.so 00:04:14.518 SYMLINK libspdk_bdev_aio.so 00:04:14.518 LIB libspdk_bdev_lvol.a 00:04:14.518 SYMLINK libspdk_bdev_delay.so 00:04:14.518 SYMLINK libspdk_bdev_iscsi.so 00:04:14.518 LIB libspdk_bdev_virtio.a 00:04:14.778 SO libspdk_bdev_lvol.so.6.0 00:04:14.778 SO libspdk_bdev_virtio.so.6.0 00:04:14.778 SYMLINK libspdk_bdev_lvol.so 00:04:14.778 SYMLINK libspdk_bdev_virtio.so 00:04:15.037 LIB libspdk_bdev_raid.a 00:04:15.037 SO libspdk_bdev_raid.so.6.0 00:04:15.037 SYMLINK libspdk_bdev_raid.so 00:04:15.977 LIB libspdk_bdev_nvme.a 00:04:15.977 SO libspdk_bdev_nvme.so.7.0 00:04:15.977 SYMLINK libspdk_bdev_nvme.so 00:04:16.545 CC module/event/subsystems/iobuf/iobuf.o 00:04:16.545 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:16.545 CC module/event/subsystems/vmd/vmd.o 00:04:16.545 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:16.545 CC module/event/subsystems/keyring/keyring.o 00:04:16.545 CC module/event/subsystems/scheduler/scheduler.o 00:04:16.545 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:16.545 CC module/event/subsystems/sock/sock.o 00:04:16.545 LIB libspdk_event_keyring.a 00:04:16.803 LIB libspdk_event_scheduler.a 00:04:16.803 LIB libspdk_event_vmd.a 00:04:16.803 LIB libspdk_event_iobuf.a 00:04:16.803 LIB libspdk_event_vhost_blk.a 00:04:16.803 SO libspdk_event_keyring.so.1.0 00:04:16.803 LIB libspdk_event_sock.a 00:04:16.803 SO libspdk_event_scheduler.so.4.0 00:04:16.803 SO libspdk_event_vmd.so.6.0 00:04:16.804 SO libspdk_event_iobuf.so.3.0 00:04:16.804 SO libspdk_event_vhost_blk.so.3.0 00:04:16.804 SO libspdk_event_sock.so.5.0 00:04:16.804 SYMLINK libspdk_event_keyring.so 00:04:16.804 SYMLINK libspdk_event_vmd.so 00:04:16.804 SYMLINK libspdk_event_scheduler.so 00:04:16.804 SYMLINK libspdk_event_vhost_blk.so 00:04:16.804 SYMLINK libspdk_event_iobuf.so 00:04:16.804 SYMLINK libspdk_event_sock.so 00:04:17.062 CC module/event/subsystems/accel/accel.o 00:04:17.321 LIB libspdk_event_accel.a 00:04:17.321 SO libspdk_event_accel.so.6.0 00:04:17.321 SYMLINK libspdk_event_accel.so 00:04:17.579 CC module/event/subsystems/bdev/bdev.o 00:04:17.839 LIB libspdk_event_bdev.a 00:04:17.839 SO libspdk_event_bdev.so.6.0 00:04:17.839 SYMLINK libspdk_event_bdev.so 00:04:18.098 CC module/event/subsystems/ublk/ublk.o 00:04:18.098 CC module/event/subsystems/nbd/nbd.o 00:04:18.098 CC module/event/subsystems/scsi/scsi.o 00:04:18.098 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:18.098 CC module/event/subsystems/nvmf/nvmf_tgt.o 
00:04:18.357 LIB libspdk_event_ublk.a 00:04:18.357 LIB libspdk_event_nbd.a 00:04:18.357 LIB libspdk_event_scsi.a 00:04:18.357 SO libspdk_event_ublk.so.3.0 00:04:18.357 SO libspdk_event_nbd.so.6.0 00:04:18.357 SO libspdk_event_scsi.so.6.0 00:04:18.357 LIB libspdk_event_nvmf.a 00:04:18.357 SYMLINK libspdk_event_ublk.so 00:04:18.357 SYMLINK libspdk_event_nbd.so 00:04:18.357 SO libspdk_event_nvmf.so.6.0 00:04:18.357 SYMLINK libspdk_event_scsi.so 00:04:18.357 SYMLINK libspdk_event_nvmf.so 00:04:18.617 CC module/event/subsystems/iscsi/iscsi.o 00:04:18.617 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:18.877 LIB libspdk_event_vhost_scsi.a 00:04:18.877 LIB libspdk_event_iscsi.a 00:04:18.877 SO libspdk_event_vhost_scsi.so.3.0 00:04:18.877 SO libspdk_event_iscsi.so.6.0 00:04:18.877 SYMLINK libspdk_event_vhost_scsi.so 00:04:18.877 SYMLINK libspdk_event_iscsi.so 00:04:19.136 SO libspdk.so.6.0 00:04:19.137 SYMLINK libspdk.so 00:04:19.395 CXX app/trace/trace.o 00:04:19.395 CC app/spdk_nvme_perf/perf.o 00:04:19.395 CC app/spdk_nvme_identify/identify.o 00:04:19.395 TEST_HEADER include/spdk/accel_module.h 00:04:19.395 TEST_HEADER include/spdk/accel.h 00:04:19.395 TEST_HEADER include/spdk/assert.h 00:04:19.395 TEST_HEADER include/spdk/barrier.h 00:04:19.395 TEST_HEADER include/spdk/base64.h 00:04:19.395 CC app/spdk_top/spdk_top.o 00:04:19.395 TEST_HEADER include/spdk/bdev.h 00:04:19.395 TEST_HEADER include/spdk/bdev_zone.h 00:04:19.395 TEST_HEADER include/spdk/bdev_module.h 00:04:19.395 CC test/rpc_client/rpc_client_test.o 00:04:19.395 CC app/spdk_lspci/spdk_lspci.o 00:04:19.395 TEST_HEADER include/spdk/bit_array.h 00:04:19.395 CC app/trace_record/trace_record.o 00:04:19.395 TEST_HEADER include/spdk/blob_bdev.h 00:04:19.395 TEST_HEADER include/spdk/bit_pool.h 00:04:19.395 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:19.395 TEST_HEADER include/spdk/conf.h 00:04:19.395 CC app/spdk_nvme_discover/discovery_aer.o 00:04:19.395 TEST_HEADER include/spdk/blobfs.h 00:04:19.395 TEST_HEADER include/spdk/blob.h 00:04:19.395 TEST_HEADER include/spdk/crc16.h 00:04:19.395 TEST_HEADER include/spdk/config.h 00:04:19.395 TEST_HEADER include/spdk/cpuset.h 00:04:19.395 TEST_HEADER include/spdk/crc64.h 00:04:19.395 TEST_HEADER include/spdk/crc32.h 00:04:19.395 TEST_HEADER include/spdk/dma.h 00:04:19.395 TEST_HEADER include/spdk/dif.h 00:04:19.395 TEST_HEADER include/spdk/env_dpdk.h 00:04:19.395 TEST_HEADER include/spdk/endian.h 00:04:19.395 TEST_HEADER include/spdk/env.h 00:04:19.395 TEST_HEADER include/spdk/fd_group.h 00:04:19.395 TEST_HEADER include/spdk/file.h 00:04:19.395 TEST_HEADER include/spdk/event.h 00:04:19.395 CC app/nvmf_tgt/nvmf_main.o 00:04:19.395 TEST_HEADER include/spdk/fd.h 00:04:19.395 CC app/spdk_dd/spdk_dd.o 00:04:19.395 TEST_HEADER include/spdk/gpt_spec.h 00:04:19.395 TEST_HEADER include/spdk/ftl.h 00:04:19.395 TEST_HEADER include/spdk/histogram_data.h 00:04:19.395 TEST_HEADER include/spdk/hexlify.h 00:04:19.395 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:19.395 TEST_HEADER include/spdk/idxd.h 00:04:19.395 TEST_HEADER include/spdk/idxd_spec.h 00:04:19.395 TEST_HEADER include/spdk/init.h 00:04:19.395 TEST_HEADER include/spdk/ioat.h 00:04:19.395 TEST_HEADER include/spdk/json.h 00:04:19.395 TEST_HEADER include/spdk/ioat_spec.h 00:04:19.395 TEST_HEADER include/spdk/iscsi_spec.h 00:04:19.395 TEST_HEADER include/spdk/keyring.h 00:04:19.395 TEST_HEADER include/spdk/keyring_module.h 00:04:19.395 TEST_HEADER include/spdk/jsonrpc.h 00:04:19.395 TEST_HEADER include/spdk/likely.h 00:04:19.395 TEST_HEADER 
include/spdk/log.h 00:04:19.395 TEST_HEADER include/spdk/memory.h 00:04:19.663 TEST_HEADER include/spdk/lvol.h 00:04:19.663 TEST_HEADER include/spdk/nbd.h 00:04:19.663 TEST_HEADER include/spdk/mmio.h 00:04:19.663 TEST_HEADER include/spdk/net.h 00:04:19.663 TEST_HEADER include/spdk/nvme.h 00:04:19.663 TEST_HEADER include/spdk/notify.h 00:04:19.663 TEST_HEADER include/spdk/nvme_intel.h 00:04:19.663 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:19.663 CC app/iscsi_tgt/iscsi_tgt.o 00:04:19.663 TEST_HEADER include/spdk/nvme_spec.h 00:04:19.663 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:19.663 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:19.663 TEST_HEADER include/spdk/nvme_zns.h 00:04:19.663 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:19.663 TEST_HEADER include/spdk/nvmf.h 00:04:19.663 TEST_HEADER include/spdk/opal_spec.h 00:04:19.663 TEST_HEADER include/spdk/nvmf_transport.h 00:04:19.663 TEST_HEADER include/spdk/nvmf_spec.h 00:04:19.663 TEST_HEADER include/spdk/pci_ids.h 00:04:19.663 TEST_HEADER include/spdk/opal.h 00:04:19.663 TEST_HEADER include/spdk/queue.h 00:04:19.663 TEST_HEADER include/spdk/pipe.h 00:04:19.663 TEST_HEADER include/spdk/reduce.h 00:04:19.663 TEST_HEADER include/spdk/rpc.h 00:04:19.663 TEST_HEADER include/spdk/scheduler.h 00:04:19.663 TEST_HEADER include/spdk/scsi.h 00:04:19.663 TEST_HEADER include/spdk/scsi_spec.h 00:04:19.663 TEST_HEADER include/spdk/sock.h 00:04:19.663 TEST_HEADER include/spdk/string.h 00:04:19.663 TEST_HEADER include/spdk/thread.h 00:04:19.663 TEST_HEADER include/spdk/stdinc.h 00:04:19.664 TEST_HEADER include/spdk/trace_parser.h 00:04:19.664 TEST_HEADER include/spdk/trace.h 00:04:19.664 TEST_HEADER include/spdk/tree.h 00:04:19.664 TEST_HEADER include/spdk/ublk.h 00:04:19.664 TEST_HEADER include/spdk/uuid.h 00:04:19.664 TEST_HEADER include/spdk/version.h 00:04:19.664 TEST_HEADER include/spdk/util.h 00:04:19.664 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:19.664 CC app/spdk_tgt/spdk_tgt.o 00:04:19.664 TEST_HEADER include/spdk/vhost.h 00:04:19.664 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:19.664 TEST_HEADER include/spdk/vmd.h 00:04:19.664 TEST_HEADER include/spdk/xor.h 00:04:19.664 TEST_HEADER include/spdk/zipf.h 00:04:19.664 CXX test/cpp_headers/accel.o 00:04:19.664 CXX test/cpp_headers/assert.o 00:04:19.664 CXX test/cpp_headers/accel_module.o 00:04:19.664 CXX test/cpp_headers/barrier.o 00:04:19.664 CXX test/cpp_headers/base64.o 00:04:19.664 CXX test/cpp_headers/bdev.o 00:04:19.664 CXX test/cpp_headers/bdev_zone.o 00:04:19.664 CXX test/cpp_headers/bdev_module.o 00:04:19.664 CXX test/cpp_headers/bit_array.o 00:04:19.664 CXX test/cpp_headers/blob_bdev.o 00:04:19.664 CXX test/cpp_headers/bit_pool.o 00:04:19.664 CXX test/cpp_headers/blobfs_bdev.o 00:04:19.664 CXX test/cpp_headers/blob.o 00:04:19.664 CXX test/cpp_headers/config.o 00:04:19.664 CXX test/cpp_headers/blobfs.o 00:04:19.664 CXX test/cpp_headers/conf.o 00:04:19.664 CXX test/cpp_headers/crc16.o 00:04:19.664 CXX test/cpp_headers/cpuset.o 00:04:19.664 CXX test/cpp_headers/crc32.o 00:04:19.664 CXX test/cpp_headers/dif.o 00:04:19.664 CXX test/cpp_headers/endian.o 00:04:19.664 CXX test/cpp_headers/crc64.o 00:04:19.664 CXX test/cpp_headers/dma.o 00:04:19.664 CXX test/cpp_headers/env.o 00:04:19.664 CXX test/cpp_headers/event.o 00:04:19.664 CXX test/cpp_headers/env_dpdk.o 00:04:19.664 CXX test/cpp_headers/fd.o 00:04:19.664 CXX test/cpp_headers/file.o 00:04:19.664 CXX test/cpp_headers/gpt_spec.o 00:04:19.664 CXX test/cpp_headers/fd_group.o 00:04:19.664 CXX test/cpp_headers/ftl.o 00:04:19.664 CXX 
test/cpp_headers/hexlify.o 00:04:19.664 CXX test/cpp_headers/idxd.o 00:04:19.664 CXX test/cpp_headers/histogram_data.o 00:04:19.664 CXX test/cpp_headers/ioat.o 00:04:19.664 CXX test/cpp_headers/idxd_spec.o 00:04:19.664 CXX test/cpp_headers/init.o 00:04:19.664 CXX test/cpp_headers/ioat_spec.o 00:04:19.664 CXX test/cpp_headers/json.o 00:04:19.664 CXX test/cpp_headers/iscsi_spec.o 00:04:19.664 CXX test/cpp_headers/jsonrpc.o 00:04:19.664 CXX test/cpp_headers/keyring.o 00:04:19.664 CXX test/cpp_headers/keyring_module.o 00:04:19.664 CXX test/cpp_headers/likely.o 00:04:19.664 CXX test/cpp_headers/log.o 00:04:19.664 CXX test/cpp_headers/memory.o 00:04:19.664 CXX test/cpp_headers/lvol.o 00:04:19.664 CXX test/cpp_headers/nbd.o 00:04:19.664 CXX test/cpp_headers/net.o 00:04:19.664 CXX test/cpp_headers/mmio.o 00:04:19.664 CXX test/cpp_headers/notify.o 00:04:19.664 CXX test/cpp_headers/nvme.o 00:04:19.664 CXX test/cpp_headers/nvme_intel.o 00:04:19.664 CXX test/cpp_headers/nvme_ocssd.o 00:04:19.664 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:19.664 CXX test/cpp_headers/nvme_spec.o 00:04:19.664 CXX test/cpp_headers/nvmf_cmd.o 00:04:19.664 CXX test/cpp_headers/nvme_zns.o 00:04:19.664 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:19.664 CXX test/cpp_headers/nvmf_spec.o 00:04:19.664 CXX test/cpp_headers/nvmf.o 00:04:19.664 CXX test/cpp_headers/nvmf_transport.o 00:04:19.664 CXX test/cpp_headers/opal.o 00:04:19.664 CXX test/cpp_headers/opal_spec.o 00:04:19.664 CXX test/cpp_headers/pci_ids.o 00:04:19.664 CXX test/cpp_headers/pipe.o 00:04:19.664 CXX test/cpp_headers/queue.o 00:04:19.664 CC examples/ioat/verify/verify.o 00:04:19.664 CC examples/ioat/perf/perf.o 00:04:19.664 CC examples/util/zipf/zipf.o 00:04:19.664 CC test/thread/poller_perf/poller_perf.o 00:04:19.664 CC test/env/vtophys/vtophys.o 00:04:19.664 CXX test/cpp_headers/reduce.o 00:04:19.664 CC test/env/memory/memory_ut.o 00:04:19.664 CC test/env/pci/pci_ut.o 00:04:19.664 CC app/fio/nvme/fio_plugin.o 00:04:19.664 CC test/app/jsoncat/jsoncat.o 00:04:19.664 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:19.664 CC test/app/histogram_perf/histogram_perf.o 00:04:19.664 CC test/app/stub/stub.o 00:04:19.664 CC test/app/bdev_svc/bdev_svc.o 00:04:19.664 LINK spdk_lspci 00:04:19.664 CXX test/cpp_headers/rpc.o 00:04:19.930 CC test/dma/test_dma/test_dma.o 00:04:19.930 CC app/fio/bdev/fio_plugin.o 00:04:19.930 LINK interrupt_tgt 00:04:19.930 LINK rpc_client_test 00:04:19.930 LINK nvmf_tgt 00:04:19.930 LINK iscsi_tgt 00:04:20.190 LINK spdk_tgt 00:04:20.190 LINK spdk_nvme_discover 00:04:20.190 CC test/env/mem_callbacks/mem_callbacks.o 00:04:20.190 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:20.190 LINK vtophys 00:04:20.190 LINK poller_perf 00:04:20.190 LINK jsoncat 00:04:20.190 LINK zipf 00:04:20.190 CXX test/cpp_headers/scheduler.o 00:04:20.190 CXX test/cpp_headers/scsi.o 00:04:20.190 CXX test/cpp_headers/scsi_spec.o 00:04:20.190 CXX test/cpp_headers/sock.o 00:04:20.190 CXX test/cpp_headers/stdinc.o 00:04:20.190 CXX test/cpp_headers/string.o 00:04:20.190 CXX test/cpp_headers/thread.o 00:04:20.190 CXX test/cpp_headers/trace.o 00:04:20.190 CXX test/cpp_headers/trace_parser.o 00:04:20.190 CXX test/cpp_headers/ublk.o 00:04:20.190 CXX test/cpp_headers/util.o 00:04:20.190 CXX test/cpp_headers/tree.o 00:04:20.190 CXX test/cpp_headers/uuid.o 00:04:20.190 LINK spdk_trace_record 00:04:20.190 CXX test/cpp_headers/version.o 00:04:20.190 LINK verify 00:04:20.190 CXX test/cpp_headers/vfio_user_pci.o 00:04:20.190 CXX test/cpp_headers/vfio_user_spec.o 00:04:20.190 CXX 
test/cpp_headers/vhost.o 00:04:20.190 CXX test/cpp_headers/vmd.o 00:04:20.190 LINK bdev_svc 00:04:20.190 CXX test/cpp_headers/xor.o 00:04:20.190 CXX test/cpp_headers/zipf.o 00:04:20.190 LINK histogram_perf 00:04:20.190 LINK spdk_dd 00:04:20.190 LINK env_dpdk_post_init 00:04:20.449 LINK stub 00:04:20.450 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:20.450 LINK ioat_perf 00:04:20.450 LINK spdk_trace 00:04:20.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:20.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:20.450 LINK pci_ut 00:04:20.450 LINK test_dma 00:04:20.708 LINK nvme_fuzz 00:04:20.708 LINK spdk_nvme 00:04:20.708 CC test/event/event_perf/event_perf.o 00:04:20.708 CC test/event/reactor/reactor.o 00:04:20.708 CC examples/vmd/lsvmd/lsvmd.o 00:04:20.708 LINK spdk_bdev 00:04:20.708 CC examples/vmd/led/led.o 00:04:20.708 CC test/event/reactor_perf/reactor_perf.o 00:04:20.708 CC examples/idxd/perf/perf.o 00:04:20.708 CC app/vhost/vhost.o 00:04:20.708 CC examples/sock/hello_world/hello_sock.o 00:04:20.708 CC test/event/app_repeat/app_repeat.o 00:04:20.708 CC test/event/scheduler/scheduler.o 00:04:20.708 LINK vhost_fuzz 00:04:20.708 LINK spdk_nvme_identify 00:04:20.708 CC examples/thread/thread/thread_ex.o 00:04:20.708 LINK spdk_nvme_perf 00:04:20.708 LINK mem_callbacks 00:04:20.708 LINK event_perf 00:04:20.708 LINK reactor_perf 00:04:20.708 LINK reactor 00:04:20.709 LINK lsvmd 00:04:20.709 LINK spdk_top 00:04:20.709 LINK led 00:04:20.967 LINK app_repeat 00:04:20.967 LINK vhost 00:04:20.967 LINK hello_sock 00:04:20.967 LINK scheduler 00:04:20.967 LINK idxd_perf 00:04:20.967 LINK thread 00:04:20.967 CC test/nvme/boot_partition/boot_partition.o 00:04:20.967 CC test/nvme/err_injection/err_injection.o 00:04:20.967 CC test/nvme/overhead/overhead.o 00:04:20.967 CC test/nvme/startup/startup.o 00:04:20.967 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:20.967 CC test/nvme/e2edp/nvme_dp.o 00:04:20.967 CC test/nvme/cuse/cuse.o 00:04:20.967 CC test/nvme/reset/reset.o 00:04:20.967 CC test/nvme/aer/aer.o 00:04:20.967 CC test/nvme/connect_stress/connect_stress.o 00:04:20.967 CC test/nvme/reserve/reserve.o 00:04:20.967 CC test/nvme/compliance/nvme_compliance.o 00:04:20.967 CC test/nvme/fdp/fdp.o 00:04:20.967 CC test/nvme/sgl/sgl.o 00:04:20.967 CC test/nvme/simple_copy/simple_copy.o 00:04:20.967 CC test/nvme/fused_ordering/fused_ordering.o 00:04:20.967 CC test/accel/dif/dif.o 00:04:20.967 CC test/blobfs/mkfs/mkfs.o 00:04:20.967 LINK memory_ut 00:04:21.225 CC test/lvol/esnap/esnap.o 00:04:21.225 LINK boot_partition 00:04:21.225 LINK startup 00:04:21.225 LINK doorbell_aers 00:04:21.225 LINK err_injection 00:04:21.225 LINK connect_stress 00:04:21.225 LINK fused_ordering 00:04:21.225 LINK reserve 00:04:21.225 LINK overhead 00:04:21.225 LINK simple_copy 00:04:21.225 LINK reset 00:04:21.225 LINK mkfs 00:04:21.225 LINK sgl 00:04:21.225 LINK nvme_dp 00:04:21.225 LINK aer 00:04:21.225 LINK nvme_compliance 00:04:21.225 LINK fdp 00:04:21.225 CC examples/nvme/reconnect/reconnect.o 00:04:21.225 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:21.225 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.225 CC examples/nvme/hotplug/hotplug.o 00:04:21.484 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.484 CC examples/nvme/arbitration/arbitration.o 00:04:21.484 CC examples/nvme/hello_world/hello_world.o 00:04:21.484 CC examples/nvme/abort/abort.o 00:04:21.484 LINK dif 00:04:21.484 CC examples/accel/perf/accel_perf.o 00:04:21.484 CC examples/blob/cli/blobcli.o 00:04:21.484 CC 
examples/blob/hello_world/hello_blob.o 00:04:21.484 LINK pmr_persistence 00:04:21.484 LINK cmb_copy 00:04:21.484 LINK hello_world 00:04:21.484 LINK hotplug 00:04:21.742 LINK reconnect 00:04:21.742 LINK arbitration 00:04:21.742 LINK iscsi_fuzz 00:04:21.742 LINK abort 00:04:21.742 LINK hello_blob 00:04:21.742 LINK nvme_manage 00:04:21.742 LINK accel_perf 00:04:21.742 LINK blobcli 00:04:22.001 CC test/bdev/bdevio/bdevio.o 00:04:22.001 LINK cuse 00:04:22.259 LINK bdevio 00:04:22.259 CC examples/bdev/hello_world/hello_bdev.o 00:04:22.259 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.518 LINK hello_bdev 00:04:22.777 LINK bdevperf 00:04:23.347 CC examples/nvmf/nvmf/nvmf.o 00:04:23.606 LINK nvmf 00:04:24.544 LINK esnap 00:04:24.804 00:04:24.804 real 0m45.381s 00:04:24.804 user 6m20.703s 00:04:24.804 sys 3m20.213s 00:04:24.804 18:55:17 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:24.804 18:55:17 make -- common/autotest_common.sh@10 -- $ set +x 00:04:24.804 ************************************ 00:04:24.804 END TEST make 00:04:24.804 ************************************ 00:04:24.804 18:55:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:24.804 18:55:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:24.804 18:55:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:24.804 18:55:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.804 18:55:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:24.804 18:55:17 -- pm/common@44 -- $ pid=496930 00:04:24.804 18:55:17 -- pm/common@50 -- $ kill -TERM 496930 00:04:24.804 18:55:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.804 18:55:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:24.804 18:55:17 -- pm/common@44 -- $ pid=496931 00:04:24.804 18:55:17 -- pm/common@50 -- $ kill -TERM 496931 00:04:24.804 18:55:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.804 18:55:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:24.804 18:55:17 -- pm/common@44 -- $ pid=496933 00:04:24.804 18:55:17 -- pm/common@50 -- $ kill -TERM 496933 00:04:24.804 18:55:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.804 18:55:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:24.804 18:55:17 -- pm/common@44 -- $ pid=496954 00:04:24.804 18:55:17 -- pm/common@50 -- $ sudo -E kill -TERM 496954 00:04:25.064 18:55:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:25.064 18:55:17 -- nvmf/common.sh@7 -- # uname -s 00:04:25.064 18:55:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.064 18:55:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.064 18:55:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.064 18:55:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.064 18:55:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.064 18:55:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.064 18:55:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.064 18:55:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.064 18:55:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.064 18:55:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:04:25.064 18:55:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:04:25.064 18:55:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:04:25.064 18:55:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.064 18:55:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.064 18:55:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:25.064 18:55:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.064 18:55:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:25.064 18:55:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.064 18:55:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.064 18:55:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.064 18:55:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.064 18:55:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.064 18:55:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.064 18:55:17 -- paths/export.sh@5 -- # export PATH 00:04:25.064 18:55:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.064 18:55:17 -- nvmf/common.sh@47 -- # : 0 00:04:25.064 18:55:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.064 18:55:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.064 18:55:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.065 18:55:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.065 18:55:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.065 18:55:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.065 18:55:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.065 18:55:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.065 18:55:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:25.065 18:55:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:25.065 18:55:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:25.065 18:55:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:25.065 18:55:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:25.065 18:55:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:25.065 18:55:17 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:25.065 18:55:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:25.065 18:55:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:25.065 18:55:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:25.065 18:55:17 -- spdk/autotest.sh@48 -- # udevadm_pid=556049 00:04:25.065 18:55:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:25.065 18:55:17 -- pm/common@17 -- # local monitor 00:04:25.065 18:55:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:25.065 18:55:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.065 18:55:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.065 18:55:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.065 18:55:17 -- pm/common@21 -- # date +%s 00:04:25.065 18:55:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.065 18:55:17 -- pm/common@21 -- # date +%s 00:04:25.065 18:55:17 -- pm/common@25 -- # sleep 1 00:04:25.065 18:55:17 -- pm/common@21 -- # date +%s 00:04:25.065 18:55:17 -- pm/common@21 -- # date +%s 00:04:25.065 18:55:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926517 00:04:25.065 18:55:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926517 00:04:25.065 18:55:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926517 00:04:25.065 18:55:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721926517 00:04:25.065 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926517_collect-vmstat.pm.log 00:04:25.065 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926517_collect-cpu-load.pm.log 00:04:25.065 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926517_collect-cpu-temp.pm.log 00:04:25.065 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721926517_collect-bmc-pm.bmc.pm.log 00:04:26.004 18:55:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:26.004 18:55:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:26.004 18:55:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.004 18:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.004 18:55:18 -- spdk/autotest.sh@59 -- # create_test_list 00:04:26.004 18:55:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:26.004 18:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:26.004 18:55:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:26.004 18:55:18 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:26.004 18:55:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:26.004 18:55:18 -- spdk/autotest.sh@62 -- # 
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:26.004 18:55:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:26.005 18:55:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:26.005 18:55:18 -- common/autotest_common.sh@1455 -- # uname 00:04:26.265 18:55:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:26.265 18:55:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:26.265 18:55:18 -- common/autotest_common.sh@1475 -- # uname 00:04:26.265 18:55:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:26.265 18:55:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:26.265 18:55:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:26.265 18:55:18 -- spdk/autotest.sh@72 -- # hash lcov 00:04:26.265 18:55:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:26.265 18:55:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:26.265 --rc lcov_branch_coverage=1 00:04:26.265 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 ' 00:04:26.265 18:55:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:26.265 --rc lcov_branch_coverage=1 00:04:26.265 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 ' 00:04:26.265 18:55:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:26.265 --rc lcov_branch_coverage=1 00:04:26.265 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --no-external' 00:04:26.265 18:55:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:26.265 --rc lcov_branch_coverage=1 00:04:26.265 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --no-external' 00:04:26.265 18:55:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:26.265 lcov: LCOV version 1.15 00:04:26.265 18:55:18 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:38.479 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:38.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:46.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:46.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:46.859 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:46.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:46.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:46.860 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:46.860 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:46.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:46.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:47.120 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no 
functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:47.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:47.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:50.419 18:55:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:50.419 18:55:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.419 18:55:42 -- common/autotest_common.sh@10 -- # set +x 00:04:50.419 18:55:42 -- spdk/autotest.sh@91 -- # rm -f 00:04:50.419 18:55:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.958 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:52.958 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:52.958 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:52.958 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:52.958 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:52.958 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:52.958 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:52.959 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:52.959 18:55:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:52.959 18:55:45 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:52.959 18:55:45 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:52.959 18:55:45 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:52.959 18:55:45 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.959 18:55:45 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:52.959 18:55:45 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:52.959 18:55:45 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.959 18:55:45 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.959 18:55:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:52.959 18:55:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.959 18:55:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:52.959 18:55:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:52.959 18:55:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:52.959 18:55:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:52.959 No valid GPT data, bailing 00:04:52.959 18:55:45 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.219 18:55:45 -- scripts/common.sh@391 -- # pt= 00:04:53.219 18:55:45 -- scripts/common.sh@392 -- # return 1 00:04:53.219 18:55:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:53.219 1+0 records in 00:04:53.219 1+0 records out 00:04:53.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00169243 s, 620 MB/s 00:04:53.219 18:55:45 -- spdk/autotest.sh@118 -- # sync 00:04:53.219 18:55:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:53.219 18:55:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:53.219 18:55:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:58.494 18:55:50 -- spdk/autotest.sh@124 -- # uname -s 00:04:58.494 18:55:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:58.494 18:55:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:58.494 18:55:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.494 18:55:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.494 18:55:50 -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 ************************************ 00:04:58.494 START TEST setup.sh 00:04:58.494 ************************************ 00:04:58.494 18:55:50 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:58.494 * Looking for test storage... 00:04:58.494 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:58.494 18:55:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:58.494 18:55:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:58.494 18:55:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:58.494 18:55:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.494 18:55:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.494 18:55:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.494 ************************************ 00:04:58.494 START TEST acl 00:04:58.494 ************************************ 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:58.494 * Looking for test storage... 
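The pre-cleanup step traced above first asks block_in_use whether /dev/nvme0n1 carries a partition table (spdk-gpt.py reports no valid GPT data, and blkid -s PTTYPE returns an empty pt), then zeroes the first MiB of the device and syncs before the setup tests start. A minimal bash sketch of that wipe-if-unused flow, with the spdk-gpt.py probe simplified to a plain blkid check and the device node assumed from the log; it is destructive, so it only makes sense against a scratch disk:

    # Sketch only: probe for a partition table, wipe the first MiB if none.
    # /dev/nvme0n1 is assumed from the log; blkid stands in for spdk-gpt.py.
    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty => no table found
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1     # mirrors autotest.sh@114
        sync                                        # flush before the next stage
    fi

Zeroing only the first MiB suffices for this purpose because that region holds the MBR and the primary GPT structures, so subsequent runs see a device with no stale partition metadata.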
00:04:58.494 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:58.494 18:55:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:58.494 18:55:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:58.494 18:55:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.494 18:55:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.793 18:55:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:01.793 18:55:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:01.793 18:55:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:01.793 18:55:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:01.793 18:55:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.793 18:55:54 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:05.088 Hugepages 00:05:05.088 node hugesize free / total 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 00:05:05.088 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
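The setup/acl.sh trace interleaved here is a scan of the setup.sh status table: each row is split with read, rows whose second field is not a PCI BDF (the hugepage summary and headers) are dropped, devices bound to other drivers (the ioatdma channels) are skipped with continue, and the NVMe controller at 0000:5e:00.0 is appended to the devs and drivers arrays. A condensed sketch of that loop, assuming the same status output format and the setup.sh path from the log; the PCI_BLOCKED filter from acl.sh@21 is included as a substring test:

    # Sketch of the collect_setup_devs pattern traced in setup/acl.sh@18-22.
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue              # not a BDF row
        [[ $driver == nvme ]] || continue              # ioatdma rows are skipped
        [[ $PCI_BLOCKED != *"$dev"* ]] || continue     # blocked controllers ignored
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status)

With PCI_BLOCKED empty, as in this run, the filter is a no-op and exactly one device survives, which is why the trace ends with (( 1 > 0 )) before the denied test begins.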
00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.088 18:55:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:05.089 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:05.089 18:55:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:05.089 18:55:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.089 18:55:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:05.089 18:55:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:05.089 18:55:57 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.089 18:55:57 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.089 18:55:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:05.089 ************************************ 00:05:05.089 START TEST denied 00:05:05.089 ************************************ 00:05:05.089 18:55:57 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:05.089 18:55:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:05:05.089 18:55:57 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:05:05.089 18:55:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:05.089 18:55:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.089 18:55:57 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:08.380 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:05:08.380 18:56:00 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.380 18:56:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.578 00:05:12.578 real 0m7.256s 00:05:12.578 user 0m2.410s 00:05:12.578 sys 0m4.155s 00:05:12.578 18:56:04 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.578 18:56:04 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:12.578 ************************************ 00:05:12.578 END TEST denied 00:05:12.578 ************************************ 00:05:12.578 18:56:04 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:12.578 18:56:04 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.578 18:56:04 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.578 18:56:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:12.578 ************************************ 00:05:12.578 START TEST allowed 00:05:12.578 ************************************ 00:05:12.578 18:56:04 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:12.578 18:56:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:05:12.578 18:56:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:12.578 18:56:04 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:05:12.578 18:56:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.578 18:56:04 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:16.777 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:16.777 18:56:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:16.777 18:56:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:16.777 18:56:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:16.777 18:56:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.777 18:56:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:20.071 00:05:20.071 real 0m7.245s 00:05:20.071 user 0m2.311s 00:05:20.071 sys 0m4.106s 00:05:20.071 18:56:11 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.072 18:56:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:20.072 ************************************ 00:05:20.072 END TEST allowed 00:05:20.072 ************************************ 00:05:20.072 00:05:20.072 real 0m20.994s 00:05:20.072 user 0m7.185s 00:05:20.072 sys 0m12.510s 00:05:20.072 18:56:11 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.072 18:56:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:20.072 ************************************ 00:05:20.072 END TEST acl 00:05:20.072 ************************************ 00:05:20.072 18:56:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:20.072 18:56:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.072 18:56:11 setup.sh -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:05:20.072 18:56:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:20.072 ************************************ 00:05:20.072 START TEST hugepages 00:05:20.072 ************************************ 00:05:20.072 18:56:11 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:20.072 * Looking for test storage... 00:05:20.072 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 180802080 kB' 'MemAvailable: 180410908 kB' 'Buffers: 2508 kB' 'Cached: 7312532 kB' 'SwapCached: 0 kB' 'Active: 7865812 kB' 'Inactive: 275368 kB' 'Active(anon): 7476208 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 835484 kB' 'Mapped: 147988 kB' 'Shmem: 6650068 kB' 'KReclaimable: 207460 kB' 'Slab: 896192 kB' 'SReclaimable: 207460 kB' 'SUnreclaim: 688732 kB' 'KernelStack: 20512 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 104557460 kB' 'Committed_AS: 8984952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324412 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.072 18:56:12 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:20.072 18:56:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue xtrace repeats for every remaining non-matching /proc/meminfo key, Shmem through HugePages_Surp ...]
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:20.073 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
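The loop the xtrace above keeps stepping through is a plain key scan over a meminfo file. A minimal, self-contained reconstruction of the get_meminfo helper as the trace shows it (assembled from the trace itself, so the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {    # usage: get_meminfo <Key> [<node>]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, read the per-node copy instead of the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the compare/continue pairs in the trace
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }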
00:05:20.074 18:56:12 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:20.074 18:56:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.074 18:56:12 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.074 18:56:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:20.074 ************************************
00:05:20.074 START TEST default_setup
00:05:20.074 ************************************
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.074 18:56:12 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
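The get_test_nr_hugepages trace above boils down to one division: the requested 2097152 kB (2 GiB) over the 2048 kB default hugepage size gives nr_hugepages=1024, all of it assigned to the single node passed in. A hedged sketch of that bookkeeping (variable names mirror the trace; this is illustrative, not the actual setup/hugepages.sh source):

    default_hugepages=2048            # kB per page, from the Hugepagesize lookup above

    get_test_nr_hugepages() {         # usage: get_test_nr_hugepages <size-kB> [<node>...]
        local size=$1; shift
        local user_nodes=("$@")
        local nr_hugepages=$((size / default_hugepages))
        local -A nodes_test=()
        local node
        for node in "${user_nodes[@]}"; do
            nodes_test[$node]=$nr_hugepages   # pin the whole allocation to each listed node
        done
        declare -p nodes_test
    }

    get_test_nr_hugepages 2097152 0   # -> declare -A nodes_test=([0]="1024")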
00:05:22.613 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:22.613 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:22.873 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:23.443 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.708 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:23.709 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182970160 kB' 'MemAvailable: 182578688 kB' 'Buffers: 2508 kB' 'Cached: 7312648 kB' 'SwapCached: 0 kB' 'Active: 7879976 kB' 'Inactive: 275368 kB' 'Active(anon): 7490372 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849588 kB' 'Mapped: 147844 kB' 'Shmem: 6650184 kB' 'KReclaimable: 206860 kB' 'Slab: 894572 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687712 kB' 'KernelStack: 20672 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8998352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324700 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
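The hugepages.sh@96 test above is the transparent-hugepage gate: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled names the active THP policy, and the AnonHugePages counter read next (0 kB in the snapshot) is only worth sampling while that policy is not [never]. A minimal sketch of the same check, assuming the sysfs file exists on the host:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier; 0 on this box
    else
        anon=0                              # THP disabled: no anonymous memory is huge-backed
    fi
    echo "AnonHugePages: ${anon} kB"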
00:05:23.709 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:23.709 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue xtrace repeats for every key from MemFree through HardwareCorrupted ...]
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
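The anon=0 assignment above and the HugePages_Surp / HugePages_Rsvd reads that follow give verify_nr_hugepages the full set of pool counters. Measured against the snapshots in this log they describe an idle, fully populated pool; an illustrative consistency check (the real verification also compares per-node counts against the nodes_test targets):

    total=$(get_meminfo HugePages_Total)   # 1024 in every snapshot above
    free=$(get_meminfo HugePages_Free)     # 1024: no page is mapped yet
    surp=$(get_meminfo HugePages_Surp)     # 0: nothing allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)     # 0: no mmap() reservations pending
    if (( free == total && surp == 0 && resv == 0 )); then
        echo "hugepage pool idle: ${total} x 2048 kB pages ready"
    fi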
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182971424 kB' 'MemAvailable: 182579952 kB' 'Buffers: 2508 kB' 'Cached: 7312656 kB' 'SwapCached: 0 kB' 'Active: 7879940 kB' 'Inactive: 275368 kB' 'Active(anon): 7490336 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849504 kB' 'Mapped: 147832 kB' 'Shmem: 6650192 kB' 'KReclaimable: 206860 kB' 'Slab: 894848 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687988 kB' 'KernelStack: 20608 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8998368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324668 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.710 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue xtrace repeats for every key from MemFree through HugePages_Rsvd ...]
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182970952 kB' 'MemAvailable: 182579480 kB' 'Buffers: 2508 kB' 'Cached: 7312676 kB' 'SwapCached: 0 kB' 'Active: 7880400 kB' 'Inactive: 275368 kB' 'Active(anon): 7490796 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849932 kB' 'Mapped: 147832 kB' 'Shmem: 6650212 kB' 'KReclaimable: 206860 kB' 'Slab: 894848 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687988 kB' 'KernelStack: 20768 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8996908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324732 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
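Every get_meminfo call in this test leaves node empty, which is why the [[ -e /sys/devices/system/node/node/meminfo ]] probe above fails and the scan stays on /proc/meminfo. Passing a node id would flip the same helper onto the per-node sysfs file; hedged usage of the sketch from earlier:

    get_meminfo HugePages_Rsvd      # system-wide, from /proc/meminfo
    get_meminfo HugePages_Rsvd 0    # NUMA node 0, from /sys/devices/system/node/node0/meminfo
    get_meminfo HugePages_Free 1    # node 1, relevant once nodes_test spans both nodes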
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:23.712 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue xtrace repeats for the keys MemFree through Mlocked; this capture ends mid-scan on the next read ...]
00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.713 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.714 nr_hugepages=1024 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.714 resv_hugepages=0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.714 surplus_hugepages=0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.714 anon_hugepages=0 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182971136 kB' 'MemAvailable: 182579664 kB' 'Buffers: 2508 kB' 'Cached: 7312692 kB' 'SwapCached: 0 kB' 'Active: 7880280 kB' 'Inactive: 275368 kB' 'Active(anon): 7490676 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 
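The HugePages_Surp and HugePages_Rsvd lookups above, and the HugePages_Total lookup that follows, all run through the same get_meminfo helper in setup/common.sh: slurp the stats file with mapfile, strip any per-node "Node N " prefix, then split each "key: value" line on IFS=': ' until the requested key matches. A minimal sketch of that lookup, reconstructed from the common.sh line references visible in the xtrace (an assumed reimplementation for illustration, not the verbatim SPDK source):

    shopt -s extglob

    get_meminfo() { # usage: get_meminfo <key> [<numa-node>]
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node counters live under /sys; with no node argument the test
        # below fails and the lookup falls back to the global /proc/meminfo,
        # which is the "node/node/meminfo" path seen in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the repeated 'continue' lines above
            echo "$val"
            return 0
        done
        return 1
    }

On this machine, get_meminfo HugePages_Rsvd prints 0, which is the resv=0 stored above, and the get_meminfo HugePages_Total call that follows prints 1024.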
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:23.714 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182971136 kB' 'MemAvailable: 182579664 kB' 'Buffers: 2508 kB' 'Cached: 7312692 kB' 'SwapCached: 0 kB' 'Active: 7880280 kB' 'Inactive: 275368 kB' 'Active(anon): 7490676 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849748 kB' 'Mapped: 147832 kB' 'Shmem: 6650228 kB' 'KReclaimable: 206860 kB' 'Slab: 894848 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687988 kB' 'KernelStack: 20704 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8998412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324668 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: common.sh@31-32 again scans every snapshot key with 'continue' until HugePages_Total matches]
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
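get_nodes, traced just above, discovers the NUMA layout by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern, exactly as in the trace) and records a per-node hugepage count in nodes_sys. A rough sketch under the same assumptions as the get_meminfo sketch earlier; the per-node value presumably comes from the same lookup, which is how node0 ends up at 1024 and node1 at 0 here:

    shopt -s extglob nullglob

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node0" to the bare node id "0".
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]} # 2 on this box: nodes_sys=([0]=1024 [1]=0)
    (( no_nodes > 0 ))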
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:23.716 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 88942468 kB' 'MemUsed: 8716176 kB' 'SwapCached: 0 kB' 'Active: 5398664 kB' 'Inactive: 90960 kB' 'Active(anon): 5254196 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249544 kB' 'Mapped: 60312 kB' 'AnonPages: 249204 kB' 'Shmem: 5014116 kB' 'KernelStack: 9672 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122264 kB' 'Slab: 459144 kB' 'SReclaimable: 122264 kB' 'SUnreclaim: 336880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: common.sh@31-32 scans the node0 snapshot key by key with 'continue' until HugePages_Surp matches]
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:23.979 node0=1024 expecting 1024
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:23.979
00:05:23.979 real 0m4.068s
00:05:23.979 user 0m1.295s
00:05:23.979 sys 0m2.038s
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:23.979 ************************************
00:05:23.979 END TEST default_setup
00:05:23.979 ************************************
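Taken together, the checks default_setup just passed boil down to: the kernel reports the requested pool (1024 pages) with no surplus or reserved pages, and the whole pool sits on node 0. Condensed into standalone form with the values logged above (hypothetical verification code, not the hugepages.sh source):

    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    node0=$(get_meminfo HugePages_Total 0) # reads node0/meminfo, as traced above
    echo "node0=$node0 expecting $nr_hugepages" # logged: node0=1024 expecting 1024
    [[ $node0 == "$nr_hugepages" ]]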
setup/common.sh@31 -- # read -r var val _ 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.979 node0=1024 expecting 1024 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.979 00:05:23.979 real 0m4.068s 00:05:23.979 user 0m1.295s 00:05:23.979 sys 0m2.038s 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.979 18:56:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:23.979 ************************************ 00:05:23.979 END TEST default_setup 00:05:23.979 ************************************ 00:05:23.979 18:56:16 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:23.979 18:56:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.979 18:56:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.979 18:56:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:23.979 ************************************ 00:05:23.979 START TEST per_node_1G_alloc 00:05:23.979 ************************************ 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # 
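The condensed read/continue runs above and below all come from the same helper, get_meminfo in setup/common.sh: it walks the meminfo file one "field: value" pair at a time and prints the value of the first field whose name matches the requested one. A minimal sketch of that scan, reconstructed from the xtrace rather than quoted from the SPDK source (the real helper first mapfiles the file into an array and strips any "Node N " prefix, which this sketch skips):

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # with a node argument, prefer the per-node counters when present
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each skipped field is one "continue" in the trace
          echo "${val:-0}"                   # value only; the units column falls into _
          return 0
      done < "$mem_f"
  }

Every long block of continue lines in the raw log is just this loop stepping past the few dozen fields that are not the one requested.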
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:23.979 18:56:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:27.279 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:27.279 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:27.279 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
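get_test_nr_hugepages converts the requested size into a page count and fans it out across the listed nodes: 1048576 kB (1 GiB) at the 2048 kB default hugepage size gives the nr_hugepages=512 seen at hugepages.sh@57, and the @70/@71 loop stores 512 for each of nodes 0 and 1 before NRHUGE=512 HUGENODE=0,1 is handed to scripts/setup.sh. A sketch of that arithmetic under the same assumptions (the division itself is inferred from the traced values, not quoted from hugepages.sh):

  size_kb=1048576          # per-node request from get_test_nr_hugepages 1048576 0 1
  default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$(( size_kb / default_hugepages ))   # 512
  nodes_test=()
  for node in 0 1; do      # user_nodes=('0' '1')
      nodes_test[node]=$nr_hugepages
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1"

With two nodes at 512 pages each, the combined expectation of 1024 pages is exactly the nr_hugepages=1024 that verify_nr_hugepages checks next.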
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182979660 kB' 'MemAvailable: 182588188 kB' 'Buffers: 2508 kB' 'Cached: 7312784 kB' 'SwapCached: 0 kB' 'Active: 7879656 kB' 'Inactive: 275368 kB' 'Active(anon): 7490052 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 848884 kB' 'Mapped: 147988 kB' 'Shmem: 6650320 kB' 'KReclaimable: 206860 kB' 'Slab: 894588 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687728 kB' 'KernelStack: 20528 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8996172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324620 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
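The snapshot just printed already reflects the allocation: HugePages_Total: 1024 and HugePages_Free: 1024 are the two 512-page node reservations combined, and the Hugetlb field is consistent with the page size, since 1024 pages at 2048 kB each is 2097152 kB (2 GiB). A one-line check against the traced numbers:

  echo $(( 1024 * 2048 )) kB   # 2097152 kB, matching the Hugetlb field above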
00:05:27.279 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/continue loop skips every field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
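With anon=0 recorded, verify_nr_hugepages moves on to the remaining counters; the hugepages.sh@92-@100 line numbers in the trace suggest a pass of roughly this shape (a reconstruction from the traced calls, not the verbatim script), where the configured page count is only trusted once nothing is transparent-huge, surplus, or reserved:

  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # queried next in the trace
  resv=$(get_meminfo HugePages_Rsvd)   # queried after that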
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182979740 kB' 'MemAvailable: 182588268 kB' 'Buffers: 2508 kB' 'Cached: 7312796 kB' 'SwapCached: 0 kB' 'Active: 7880096 kB' 'Inactive: 275368 kB' 'Active(anon): 7490492 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849396 kB' 'Mapped: 147880 kB' 'Shmem: 6650332 kB' 'KReclaimable: 206860 kB' 'Slab: 894528 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687668 kB' 'KernelStack: 20560 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8996556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324604 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:27.281 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/continue loop skips every field from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182980892 kB' 'MemAvailable: 182589420 kB' 'Buffers: 2508 kB' 'Cached: 7312816 kB' 'SwapCached: 0 kB' 'Active: 7879840 kB' 'Inactive: 275368 kB' 'Active(anon): 7490236 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849120 kB' 'Mapped: 147828 kB' 'Shmem: 6650352 kB' 'KReclaimable: 206860 kB' 'Slab: 894524 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687664 kB' 'KernelStack: 20432 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8996580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324604 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
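Each get_meminfo call in this section runs with an empty node argument, which is why the trace tests the odd path /sys/devices/system/node/node/meminfo and falls back to the system-wide /proc/meminfo. With a real node id the same logic would read the per-node counters, whose lines carry a "Node N " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace exists to strip exactly that. A sketch of the source selection, assuming the standard sysfs layout:

  node=1
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo   # lines look like "Node 1 MemTotal: ..."
  fi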
00:05:27.283 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read/continue loop scanning for HugePages_Rsvd; the log excerpt breaks off mid-scan after the KernelStack field]
00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 
18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.284 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.285 nr_hugepages=1024 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.285 resv_hugepages=0 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.285 surplus_hugepages=0 00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 
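The IFS=': '/read/continue triples collapsed above are setup/common.sh's get_meminfo streaming a meminfo file and matching one key at a time. A minimal standalone sketch of the same pattern (meminfo_get is an illustrative name, not the verbatim SPDK helper):

    # Sketch (illustrative): print the value of one /proc/meminfo key,
    # e.g. `meminfo_get HugePages_Rsvd` prints 0 on this host, matching
    # the resv=0 recorded above.
    meminfo_get() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # the "kB" unit, if any, lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }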
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.285 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182981116 kB' 'MemAvailable: 182589644 kB' 'Buffers: 2508 kB' 'Cached: 7312836 kB' 'SwapCached: 0 kB' 'Active: 7879868 kB' 'Inactive: 275368 kB' 'Active(anon): 7490264 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849124 kB' 'Mapped: 147828 kB' 'Shmem: 6650372 kB' 'KReclaimable: 206860 kB' 'Slab: 894524 kB' 'SReclaimable: 206860 kB' 'SUnreclaim: 687664 kB' 'KernelStack: 20432 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8996604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324604 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
[xtrace elided, 00:05:27.285-00:05:27.287: the key-match loop tested each /proc/meminfo key (MemTotal through Unaccepted) against HugePages_Total and issued "continue" for every non-matching key]
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
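The hugepages.sh lines 107-110 traced above assert that the kernel actually delivered the requested pool: the requested count must equal nr_hugepages, and HugePages_Total from /proc/meminfo must equal nr_hugepages plus surplus plus reserved pages. A hedged sketch of that check, reusing the meminfo_get sketch from earlier (both names are illustrative, not the exact SPDK code):

    # Sketch: the pool the kernel reports must match what the test requested.
    nr_hugepages=1024                          # requested, per the trace above
    surp=$(meminfo_get HugePages_Surp)
    resv=$(meminfo_get HugePages_Rsvd)
    total=$(meminfo_get HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: ${total} pages"
    else
        echo "hugepage pool mismatch: total=${total}, expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi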
setup/hugepages.sh@112 -- # get_nodes
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:27.287 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 90001520 kB' 'MemUsed: 7657124 kB' 'SwapCached: 0 kB' 'Active: 5399384 kB' 'Inactive: 90960 kB' 'Active(anon): 5254916 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249576 kB' 'Mapped: 60308 kB' 'AnonPages: 249924 kB' 'Shmem: 5014148 kB' 'KernelStack: 9304 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122264 kB' 'Slab: 458960 kB' 'SReclaimable: 122264 kB' 'SUnreclaim: 336696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided, 00:05:27.287-00:05:27.288: the key-match loop tested each node0 meminfo key (MemTotal through HugePages_Free) against HugePages_Surp and issued "continue" for every non-matching key]
00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
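When get_meminfo is given a node argument (HugePages_Surp 0 above), the trace shows common.sh switching the source file to /sys/devices/system/node/node0/meminfo and stripping the "Node 0 " prefix that every line in that file carries (the mem=("${mem[@]#Node +([0-9]) }") expansion). A per-node sketch under the same illustrative naming as before:

    # Sketch (illustrative name): read one key from a NUMA node's meminfo,
    # e.g. `node_meminfo_get 0 HugePages_Surp` -> 0 on this host.
    node_meminfo_get() {
        local node=$1 get=$2 line var val _
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $mem_f ]] || return 1
        while IFS= read -r line; do
            line=${line#"Node $node "}         # drop the per-node prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }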
-r var val _ 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.288 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.289 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 98873372 kB' 'MemFree: 92980252 kB' 'MemUsed: 5893120 kB' 'SwapCached: 0 kB' 'Active: 2480540 kB' 'Inactive: 184408 kB' 'Active(anon): 2235404 kB' 'Inactive(anon): 0 kB' 'Active(file): 245136 kB' 'Inactive(file): 184408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2065816 kB' 'Mapped: 87520 kB' 'AnonPages: 599204 kB' 'Shmem: 1636272 kB' 'KernelStack: 11128 kB' 'PageTables: 5288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84596 kB' 'Slab: 435564 kB' 'SReclaimable: 84596 kB' 'SUnreclaim: 350968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:27.289 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.289 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- 
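The trace above is setup/common.sh's get_meminfo helper walking /sys/devices/system/node/node1/meminfo line by line until it reaches the requested HugePages_Surp field. A minimal sketch of that logic, reconstructed from the xtrace alone; the variable and file names (get, node, mem_f, var, val) come from the trace, and the real SPDK source may differ in detail:

#!/usr/bin/env bash
shopt -s extglob # the +([0-9]) strip pattern below needs extended globbing

# Reconstruction of get_meminfo as seen in the xtrace; not the verbatim source.
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node number was passed and it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # per-node lines are prefixed "Node N "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line" # "HugePages_Surp: 0" -> var, val
        if [[ $var == "$get" ]]; then
            echo "$val" # the numeric value; unit suffix lands in _
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp 1 # -> 0, exactly as the trace returns here

This explains the long runs of "[[ X == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" in the log: every meminfo key before the match produces one comparison plus one continue in the xtrace.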
00:05:27.289 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: node1 fields MemTotal through HugePages_Free each checked against HugePages_Surp; no match, continue]
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:27.290
00:05:27.290 real    0m3.140s
00:05:27.290 user    0m1.255s
00:05:27.290 sys     0m1.956s
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:27.290 18:56:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:27.290 ************************************
00:05:27.290 END TEST per_node_1G_alloc
00:05:27.290 ************************************
00:05:27.290 18:56:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:27.290 18:56:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:27.290 18:56:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:27.290 18:56:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:27.290 ************************************
00:05:27.290 START TEST even_2G_alloc
00:05:27.290 ************************************
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69-84 -- # [xtrace condensed: no user node overrides, (( 0 > 0 )) twice; two passes of (( _no_nodes > 0 )) assign nodes_test[1]=512 and nodes_test[0]=512]
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:27.290 18:56:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:29.830 [PCI listing condensed: 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 2021), plus 0000:5e:00.0 (8086 0a54), are all already using the vfio-pci driver]
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.095 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183002016 kB' 'MemAvailable: 182610536 kB' 'Buffers: 2508 kB' 'Cached: 7312944 kB' 'SwapCached: 0 kB' 'Active: 7880660 kB' 'Inactive: 275368 kB' 'Active(anon): 7491056 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849784 kB' 'Mapped: 146924 kB' 'Shmem: 6650480 kB' 'KReclaimable: 206844 kB' 'Slab: 893820 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 686976 kB' 'KernelStack: 20448 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8989520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324684 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
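Stepping back to the sizing trace at the top of this test: a 2097152 kB (2 GiB) request divided by the 2048 kB hugepage size seen in the snapshots yields nr_hugepages=1024, and get_test_nr_hugepages_per_node splits that across both NUMA nodes because HUGE_EVEN_ALLOC=yes. A sketch of that arithmetic, reconstructed from the trace; the explicit division by node count is an assumption consistent with the nodes_test[1]=512 / nodes_test[0]=512 assignments logged above:

# Reconstruction of the sizing logic traced above; not the verbatim source.
default_hugepages=2048 # kB, matching Hugepagesize in the snapshots

get_test_nr_hugepages() {
    local size=$1 # requested total in kB: 2097152 kB = 2 GiB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$((size / default_hugepages)) # 2097152 / 2048 = 1024
}

get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=2 # two NUMA nodes on this rig
    local per_node=$((_nr_hugepages / _no_nodes)) # 1024 / 2 = 512
    nodes_test=()
    while ((_no_nodes > 0)); do
        nodes_test[--_no_nodes]=$per_node # fills nodes_test[1], then nodes_test[0]
    done
}

get_test_nr_hugepages 2097152 && get_test_nr_hugepages_per_node
echo "${nodes_test[@]}" # -> 512 512

scripts/setup.sh itself is not traced in this excerpt; the standard kernel knob for a per-node 2 MiB allocation like this one is /sys/devices/system/node/nodeN/hugepages/hugepages-2048kB/nr_hugepages.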
00:05:30.096 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields MemTotal through HardwareCorrupted each checked against AnonHugePages; no match, continue]
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
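The anon=0 just recorded comes from the two-step gate traced at hugepages.sh@96-97: the left-hand side of the [[ ... != *\[\n\e\v\e\r\]* ]] test is the contents of /sys/kernel/mm/transparent_hugepage/enabled, here "always [madvise] never", so transparent hugepages are not hard-disabled and the test takes an AnonHugePages baseline before looking at surplus pages. A sketch of that gate, reusing the get_meminfo reconstruction from earlier:

# Reconstruction of the gate at hugepages.sh@96-97; not the verbatim source.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled) # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    # THP is available, so record current anonymous-hugepage usage as a
    # baseline; get_meminfo is the helper sketched earlier in this log
    anon=$(get_meminfo AnonHugePages) # -> 0 in this run
fi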
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:30.097 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183001940 kB' 'MemAvailable: 182610460 kB' 'Buffers: 2508 kB' 'Cached: 7312944 kB' 'SwapCached: 0 kB' 'Active: 7880200 kB' 'Inactive: 275368 kB' 'Active(anon): 7490596 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849300 kB' 'Mapped: 146828 kB' 'Shmem: 6650480 kB' 'KReclaimable: 206844 kB' 'Slab: 893796 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 686952 kB' 'KernelStack: 20432 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8989536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324668 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:30.098 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields MemTotal through CmaTotal each checked against HugePages_Surp; no match, continue; scan still in progress]
setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183001060 kB' 'MemAvailable: 182609580 kB' 'Buffers: 2508 kB' 'Cached: 7312964 kB' 'SwapCached: 0 kB' 'Active: 7879832 kB' 'Inactive: 275368 kB' 'Active(anon): 7490228 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 848964 kB' 'Mapped: 146828 kB' 'Shmem: 6650500 kB' 'KReclaimable: 206844 kB' 'Slab: 893796 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 686952 kB' 'KernelStack: 20416 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8989196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324636 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 
18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.099 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.100 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:30.101 nr_hugepages=1024 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:30.101 resv_hugepages=0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:30.101 surplus_hugepages=0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:30.101 anon_hugepages=0 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183000808 kB' 'MemAvailable: 182609328 kB' 'Buffers: 2508 kB' 'Cached: 7312980 kB' 'SwapCached: 0 kB' 'Active: 7880376 kB' 'Inactive: 275368 
kB' 'Active(anon): 7490772 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 848892 kB' 'Mapped: 146828 kB' 'Shmem: 6650516 kB' 'KReclaimable: 206844 kB' 'Slab: 893796 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 686952 kB' 'KernelStack: 20368 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8989348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324636 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.101 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.102 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:30.103 18:56:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 90010692 kB' 'MemUsed: 7647952 kB' 'SwapCached: 0 kB' 'Active: 5401892 kB' 'Inactive: 90960 kB' 'Active(anon): 5257424 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249600 kB' 'Mapped: 59624 kB' 'AnonPages: 252344 kB' 'Shmem: 5014172 kB' 'KernelStack: 9256 kB' 'PageTables: 2904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122264 kB' 'Slab: 458456 kB' 'SReclaimable: 122264 kB' 'SUnreclaim: 336192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.103 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.365 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
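The long field-by-field walk above and below is setup/common.sh's get_meminfo doing a linear scan of a meminfo dump until the requested key (here HugePages_Surp for node 0) matches. A minimal sketch of that helper, reconstructed from the common.sh@17-33 xtrace lines; the loop body and variable names follow the trace, but treat it as a sketch rather than the verbatim source:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val mem_f=/proc/meminfo
        local -a mem
        # prefer the per-NUMA-node view when a node is given and present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local IFS=': ' line
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. 512 for HugePages_Total on node0
            return 0
        done
        return 1
    }

Each mismatching field produces one "continue" trace line, which is why a single lookup dominates so much of this log.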
00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 98873372 kB' 'MemFree: 92989404 kB' 'MemUsed: 5883968 kB' 'SwapCached: 0 kB' 'Active: 2478272 kB' 'Inactive: 184408 kB' 'Active(anon): 2233136 kB' 'Inactive(anon): 0 kB' 'Active(file): 245136 kB' 'Inactive(file): 184408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2065936 kB' 'Mapped: 87204 kB' 'AnonPages: 596824 kB' 'Shmem: 1636392 kB' 'KernelStack: 11128 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84580 kB' 'Slab: 435340 kB' 'SReclaimable: 84580 kB' 'SUnreclaim: 350760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
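As a quick arithmetic cross-check on the two node dumps in this pass: node 0 reports MemUsed 7647952 kB, which is exactly MemTotal 97658644 kB minus MemFree 90010692 kB, and node 1's dump just above is consistent the same way (98873372 - 92989404 = 5883968 kB). Both nodes show HugePages_Total = HugePages_Free = 512 with no surplus, which is precisely the even 1024-page split this test expects.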
'HugePages_Surp: 0' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 
18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.366 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 
18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:30.367 node0=512 expecting 512 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:30.367 node1=512 expecting 512 00:05:30.367 18:56:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:30.367 00:05:30.368 real 0m3.147s 00:05:30.368 user 0m1.318s 00:05:30.368 sys 0m1.900s 00:05:30.368 18:56:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.368 18:56:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:30.368 ************************************ 00:05:30.368 END TEST even_2G_alloc 00:05:30.368 ************************************ 00:05:30.368 18:56:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:30.368 18:56:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.368 18:56:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.368 18:56:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:30.368 ************************************ 00:05:30.368 START TEST odd_alloc 00:05:30.368 ************************************ 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- 
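The odd_alloc test starting here asks for 2098176 kB of huge pages: HUGEMEM=2049 MiB is 2049 x 1024 = 2098176 kB, which at the 2048 kB page size rounds up to nr_hugepages=1025, a count that cannot be split evenly across the two NUMA nodes. A hedged sketch of the per-node distribution that the nodes_test assignments below perform; the function name is illustrative, not from the log:

    # split N huge pages across M nodes; early nodes absorb the
    # remainder, so 1025 over 2 nodes comes out as 513 + 512
    split_hugepages_across_nodes() {
        local total=$1 nodes=$2 n
        local base=$((total / nodes)) extra=$((total % nodes))
        local -a per_node
        for ((n = 0; n < nodes; n++)); do
            per_node[n]=$((base + (n < extra ? 1 : 0)))
        done
        echo "${per_node[@]}"
    }
    split_hugepages_across_nodes 1025 2    # -> 513 512

The trace below records the same outcome: nodes_test ends up holding 513 for one node and 512 for the other.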
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.368 18:56:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:33.664 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:33.664 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.3 (8086 2021): Already using the 
vfio-pci driver 00:05:33.664 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:33.664 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:33.664 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183005460 kB' 'MemAvailable: 182613980 kB' 'Buffers: 2508 kB' 'Cached: 7313100 kB' 'SwapCached: 0 kB' 'Active: 7881468 kB' 'Inactive: 275368 kB' 'Active(anon): 7491864 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850324 kB' 'Mapped: 146868 kB' 'Shmem: 6650636 kB' 'KReclaimable: 206844 kB' 'Slab: 893948 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687104 kB' 'KernelStack: 20464 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105605012 kB' 'Committed_AS: 8990204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324700 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 
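Just above, verify_nr_hugepages first inspects the transparent-hugepage state: the pattern test at hugepages.sh@96 compares "always [madvise] never" (the kernel brackets the active mode) against *[never]*, and only goes on to sample AnonHugePages when THP is not fully disabled. A minimal sketch of that gate, assuming the standard sysfs path; the helper name is illustrative, and get_meminfo is the parser sketched earlier:

    thp_not_disabled() {
        local state
        state=$(</sys/kernel/mm/transparent_hugepage/enabled)
        # the active mode is bracketed, e.g. "always [madvise] never"
        [[ $state != *"[never]"* ]]
    }
    if thp_not_disabled; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump being walked here
    else
        anon=0
    fi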
'DirectMap1G: 182452224 kB' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.665 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
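A note on reading these traces: the right-hand side of each [[ ... == ... ]] test comes from a quoted variable expansion ("$get"), and bash's xtrace renders an operand that must match literally by backslash-escaping every character, which is why the key being searched for appears as \A\n\o\n\H\u\g\e\P\a\g\e\s rather than AnonHugePages. A two-line reproduction under set -x (an illustrative session, not taken from this log):

    $ get=AnonHugePages; set -x
    $ [[ MemTotal == "$get" ]]
    + [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]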
00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.666 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183006132 kB' 'MemAvailable: 182614652 kB' 'Buffers: 2508 kB' 'Cached: 7313104 kB' 'SwapCached: 0 kB' 'Active: 7880972 kB' 'Inactive: 275368 kB' 'Active(anon): 7491368 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 
[... xtrace scans each snapshot field in order (MemTotal through HugePages_Rsvd) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, logging "setup/common.sh@32 -- # continue", "setup/common.sh@31 -- # IFS=': '", "setup/common.sh@31 -- # read -r var val _" for every non-match ...]
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:33.668 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183005628 kB' 'MemAvailable: 182614148 kB' 'Buffers: 2508 kB' 'Cached: 7313124 kB' 'SwapCached: 0 kB' 'Active: 7881108 kB' 'Inactive: 275368 kB' 'Active(anon): 7491504 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850028 kB' 'Mapped: 146864 kB' 'Shmem: 6650660 kB' 'KReclaimable: 206844 kB' 'Slab: 893984 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687140 kB' 'KernelStack: 20448 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105605012 kB' 'Committed_AS: 8990240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324700 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
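The two lookups around this point fetch HugePages_Surp (surplus pages allocated beyond nr_hugepages under overcommit) and HugePages_Rsvd (pages reserved by mappings but not yet faulted in). The same counters are exposed per page size under sysfs, which is a convenient cross-check against a meminfo snapshot like the one above; a small sketch using standard kernel paths, with the 2048 kB size matching Hugepagesize in this run:

# Cross-check the hugepage counters against sysfs (2 MiB page size).
d=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
	printf '%-18s %s\n' "$f" "$(cat "$d/$f")"
done

On the host traced here this would be expected to print nr_hugepages=1025, free_hugepages=1025 and zeros for the other two, mirroring the snapshot.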
[... xtrace scans each snapshot field in order (MemTotal through HugePages_Free) against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, logging "setup/common.sh@32 -- # continue", "setup/common.sh@31 -- # IFS=': '", "setup/common.sh@31 -- # read -r var val _" for every non-match ...]
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:33.670 nr_hugepages=1025 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:33.670 resv_hugepages=0 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:33.670 surplus_hugepages=0 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:33.670 anon_hugepages=0 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
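With anon, surp and resv collected, hugepages.sh@107 asserts that the kernel's view adds up to the odd count the test requested: with this run's values, 1025 == 1025 + 0 + 0. Reduced to a self-contained sketch (variable names follow the trace; the awk extraction stands in for the get_meminfo helper):

# Consistency check performed at setup/hugepages.sh@107, sketched with
# the values observed in this run.
nr_hugepages=1025   # the odd allocation requested by the test
surp=0              # HugePages_Surp from the snapshot
resv=0              # HugePages_Rsvd from the snapshot

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
	echo "odd_alloc accounting consistent: $total == $nr_hugepages + $surp + $resv"
fi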
mapfile -t mem 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 183006316 kB' 'MemAvailable: 182614836 kB' 'Buffers: 2508 kB' 'Cached: 7313144 kB' 'SwapCached: 0 kB' 'Active: 7881132 kB' 'Inactive: 275368 kB' 'Active(anon): 7491528 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850028 kB' 'Mapped: 146864 kB' 'Shmem: 6650680 kB' 'KReclaimable: 206844 kB' 'Slab: 893984 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687140 kB' 'KernelStack: 20448 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105605012 kB' 'Committed_AS: 8990260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324700 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.670 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:33.671 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the IFS=': ' read loop steps past every remaining meminfo field (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) until var matches HugePages_Total]
00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
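The condensed scan above is the get_meminfo pattern from setup/common.sh that this log replays dozens of times: slurp the (optionally per-node) meminfo file, strip any "Node N " prefix, then split each line with IFS=': ' and skip fields until the requested key matches. A minimal standalone sketch of that pattern, assuming the same two-argument call shape as the trace (the upstream helper may differ in details):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: look up one key in
# /proc/meminfo (or a per-node meminfo file) and print its numeric value.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} var val _ line
	local mem_f=/proc/meminfo mem

	# Per-node counters live in sysfs; use them when a node is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node files prefix each line with "Node N "; strip it (common.sh@29).
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		# Split "Key: value kB" into key and value, exactly as the trace does.
		IFS=': ' read -r var val _ <<<"$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total      # whole system; prints 1025 in this run
get_meminfo HugePages_Surp 0     # node-scoped lookup, as in hugepages.sh@117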
00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 90010384 kB' 'MemUsed: 7648260 kB' 'SwapCached: 0 kB' 'Active: 5403136 kB' 'Inactive: 90960 kB' 'Active(anon): 5258668 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249636 kB' 'Mapped: 59640 kB' 'AnonPages: 253500 kB' 'Shmem: 5014208 kB' 'KernelStack: 9272 kB' 'PageTables: 2944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122264 kB' 'Slab: 458416 kB' 'SReclaimable: 122264 kB' 'SUnreclaim: 336152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:33.672 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the node0 fields until var matches HugePages_Surp]
00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
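The get_nodes calls traced above enumerate NUMA nodes with an extglob over sysfs and key an array by the trailing node ID. A sketch of that enumeration; the hugepages-2048kB sysfs filename is an assumption based on the 2 MiB page size reported later in this log, not something the trace itself shows:

#!/usr/bin/env bash
# Sketch of the get_nodes pattern traced above: discover NUMA node IDs via
# sysfs globbing and record each node's current 2 MiB hugepage count.
shopt -s extglob nullglob

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} strips through the last "node", leaving the numeric ID.
	nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "found $no_nodes nodes: ${nodes_sys[*]}"   # e.g. "512 513" in this run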
00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:33.673 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # [as for node 0 above, with node=1 and mem_f=/sys/devices/system/node/node1/meminfo] 00:05:33.674 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 98873372 kB' 'MemFree: 92994168 kB' 'MemUsed: 5879204 kB' 'SwapCached: 0 kB' 'Active: 2478256 kB' 'Inactive: 184408 kB' 'Active(anon): 2233120 kB' 'Inactive(anon): 0 kB' 'Active(file): 245136 kB' 'Inactive(file): 184408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2066056 kB' 'Mapped: 87200 kB' 'AnonPages: 596728 kB' 'Shmem: 1636512 kB' 'KernelStack: 11176 kB' 'PageTables: 5332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84580 kB' 'Slab: 435568 kB' 'SReclaimable: 84580 kB' 'SUnreclaim: 350988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:05:33.674 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop walks the node1 fields until var matches HugePages_Surp]
00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:33.675 node0=512 expecting 513
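The sorted_t/sorted_s assignments above are a compact bash trick for comparing two per-node hugepage layouts regardless of node order: each count is used as an array index, and because ${!arr[*]} expands indexed-array keys in ascending order, the two key lists can be compared as plain strings (the "512 513 == 512 513" test at hugepages.sh@130). A self-contained sketch with this run's values:

#!/usr/bin/env bash
# Sketch of the sorted_t/sorted_s comparison traced above.
nodes_test=(512 513)   # counts the test computed per node
nodes_sys=(512 513)    # counts read back from the system

declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
	# Using the count as the *index* builds a sorted multiset of values.
	sorted_t[nodes_test[node]]=1
	sorted_s[nodes_sys[node]]=1
done

# ${!arr[*]} lists indexed-array keys in ascending order ("512 513" here),
# so equal strings mean the same per-node distribution.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node layout matches"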
00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:33.675 node1=513 expecting 512
00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:33.675
00:05:33.675 real 0m3.114s
00:05:33.675 user 0m1.264s
00:05:33.675 sys 0m1.920s
00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.675 18:56:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:33.675 ************************************
00:05:33.675 END TEST odd_alloc
00:05:33.675 ************************************
00:05:33.675 18:56:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:33.675 18:56:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.675 18:56:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.675 18:56:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:33.675 ************************************
00:05:33.675 START TEST custom_alloc
00:05:33.675 ************************************
00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.675 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.676 18:56:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:36.214 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:36.214 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:36.214 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [as above, with no node argument, so mem_f stays /proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _] 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 181936196 kB' 'MemAvailable: 181544716 kB' 'Buffers: 2508 kB' 'Cached: 7313252 kB' 'SwapCached: 0 kB' 'Active: 7882152 kB' 'Inactive: 275368 kB' 'Active(anon): 7492548 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850884 kB' 'Mapped: 146876 kB' 'Shmem: 6650788 kB' 'KReclaimable: 206844 kB' 'Slab: 894608 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687764 kB' 'KernelStack: 20544 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105081748 kB' 'Committed_AS: 8991860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324844 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
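The hugepages.sh@96 check above inspects the transparent-hugepage policy string, where the active mode is the bracketed word (here "always [madvise] never"), and only skips anon-THP accounting when "[never]" is active. A sketch of that check; the sysfs path is the standard kernel location rather than something quoted from this log:

#!/usr/bin/env bash
# Sketch of the THP-policy check at hugepages.sh@96: the active mode is the
# bracketed word in the sysfs policy string, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	# THP may allocate anon hugepages behind the test's back; record them.
	anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
	echo "THP enabled ($thp); AnonHugePages=${anon} kB"
fi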
continue 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.479 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
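The trace above is set -x output from the test's meminfo helper: it snapshots /proc/meminfo with mapfile, strips any "Node N " prefix so per-NUMA-node files parse the same way, then walks the snapshot with IFS=': ' read -r var val _, emitting one [[ key == pattern ]] / continue pair for every field that is not the one requested (bash xtrace backslash-escapes the expanded quoted pattern, which is why AnonHugePages renders as \A\n\o\n\H\u\g\e\P\a\g\e\s). A minimal standalone sketch of the helper, reconstructed from the traced commands rather than copied from the SPDK source, assuming extglob is enabled for the Node-prefix strip:

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, read that NUMA node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it so both
        # formats parse identically (needs: shopt -s extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # one traced test per field
            echo "$val"                        # value only; the kB unit lands in _
            return 0
        done
        return 1
    }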
00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.480 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 
-- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 181938868 kB' 'MemAvailable: 181547388 kB' 'Buffers: 2508 kB' 'Cached: 7313252 kB' 'SwapCached: 0 kB' 'Active: 7882236 kB' 'Inactive: 275368 kB' 'Active(anon): 7492632 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 851492 kB' 'Mapped: 146872 kB' 'Shmem: 6650788 kB' 'KReclaimable: 206844 kB' 'Slab: 894692 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687848 kB' 'KernelStack: 20480 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105081748 kB' 'Committed_AS: 8993360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324828 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
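Each call re-snapshots the file and scans linearly from the top, so this second lookup (HugePages_Surp) replays the same continue chain across every earlier /proc/meminfo field; on a match the helper echoes the value and returns 0, which is where the surp=0 assignment further down comes from. The snapshot-then-scan shape keeps one consistent view of the file per query; for a one-off lookup outside the harness, an awk filter over the same file is an equivalent shortcut (illustrative, not the script's own method):

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo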
00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.481 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.482 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 181940384 kB' 'MemAvailable: 181548904 kB' 'Buffers: 2508 kB' 'Cached: 7313272 kB' 'SwapCached: 0 kB' 'Active: 7882144 kB' 'Inactive: 275368 kB' 'Active(anon): 7492540 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850896 kB' 'Mapped: 146872 kB' 'Shmem: 6650808 kB' 'KReclaimable: 206844 kB' 'Slab: 894692 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687848 kB' 'KernelStack: 20512 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105081748 kB' 'Committed_AS: 8991896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324796 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.483 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
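The third lookup targets HugePages_Rsvd. HugePages_Surp counts surplus pages allocated beyond nr_hugepages under overcommit, while HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in; both need to read 0 here for the strict accounting check that follows to hold. The same counters are also exposed per page size under sysfs, which is handy when 2 MiB and 1 GiB pools coexist (illustrative commands for the 2048 kB pool this run uses):

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages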
00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.484 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:05:36.485 nr_hugepages=1536
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:36.485 resv_hugepages=0
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:36.485 surplus_hugepages=0
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:36.485 anon_hugepages=0
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 181938708 kB' 'MemAvailable: 181547228 kB' 'Buffers: 2508 kB' 'Cached: 7313296 kB' 'SwapCached: 0 kB' 'Active: 7882344 kB' 'Inactive: 275368 kB' 'Active(anon): 7492740 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 851016 kB' 'Mapped: 146872 kB' 'Shmem: 6650832 kB' 'KReclaimable: 206844 kB' 'Slab: 894692 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 687848 kB' 'KernelStack: 20624 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105081748 kB' 'Committed_AS: 8993156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324828 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
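As a quick sanity check on the snapshot just printed, the hugetlb accounting is internally consistent: the Hugetlb field should equal HugePages_Total times Hugepagesize, both taken from the same printf line.

echo $(( 1536 * 2048 ))   # 1536 pages * 2048 kB/page -> 3145728, matching 'Hugetlb: 3145728 kB'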
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:36.485 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue cycle repeats, without a match, for each remaining field from MemFree through Unaccepted ...]
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
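hugepages.sh@115-@117 then visit every populated node, padding each node's expected page count with the reserved and surplus pages the kernel reports. A sketch of that loop, reusing the get_meminfo sketch above and the values visible in this run:

nodes_test=([0]=512 [1]=1024)   # per-node targets set up by the test
resv=0                           # from the HugePages_Rsvd lookup earlier
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                # hugepages.sh@116
    surp=$(get_meminfo HugePages_Surp "$node")    # hugepages.sh@117
    (( nodes_test[node] += surp ))                # 0 on both nodes here
done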
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 89987700 kB' 'MemUsed: 7670944 kB' 'SwapCached: 0 kB' 'Active: 5405520 kB' 'Inactive: 90960 kB' 'Active(anon): 5261052 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249712 kB' 'Mapped: 59672 kB' 'AnonPages: 255896 kB' 'Shmem: 5014284 kB' 'KernelStack: 9288 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122264 kB' 'Slab: 458468 kB' 'SReclaimable: 122264 kB' 'SUnreclaim: 336204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.487 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue cycle repeats, without a match, for each remaining node0 field from MemFree through HugePages_Free ...]
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
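Both node lookups run the mem=("${mem[@]#Node +([0-9]) }") step seen in the trace because the per-node sysfs files prefix every line with "Node N ", unlike /proc/meminfo; stripping the prefix lets the same parser handle both sources. For example:

shopt -s extglob
line='Node 1 HugePages_Surp: 0'    # typical /sys/devices/system/node/node1/meminfo line
echo "${line#Node +([0-9]) }"      # -> 'HugePages_Surp: 0'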
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 98873372 kB' 'MemFree: 91949768 kB' 'MemUsed: 6923604 kB' 'SwapCached: 0 kB' 'Active: 2476980 kB' 'Inactive: 184408 kB' 'Active(anon): 2231844 kB' 'Inactive(anon): 0 kB' 'Active(file): 245136 kB' 'Inactive(file): 184408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2066092 kB' 'Mapped: 87200 kB' 'AnonPages: 595304 kB' 'Shmem: 1636548 kB' 'KernelStack: 11480 kB' 'PageTables: 6124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84580 kB' 'Slab: 436224 kB' 'SReclaimable: 84580 kB' 'SUnreclaim: 351644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.761 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue cycle repeats, without a match, for each remaining node1 field from MemFree through HugePages_Free ...]
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:36.762 node0=512 expecting 512
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:36.762 node1=1024 expecting 1024
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:36.762
00:05:36.762 real 0m3.129s
00:05:36.762 user 0m1.305s
00:05:36.762 sys 0m1.893s
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:36.762 18:56:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:36.762 ************************************
00:05:36.762 END TEST custom_alloc
00:05:36.762 ************************************
00:05:36.762 18:56:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:36.762 18:56:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:36.762 18:56:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:36.762 18:56:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:36.762 ************************************
00:05:36.762 START TEST no_shrink_alloc
00:05:36.762 ************************************
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
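Before the "setup output" step runs, the trace has already fixed the test size: get_test_nr_hugepages 2097152 0 asks for 2097152 kB backed by the default 2048 kB hugepages, i.e. nr_hugepages=1024, all pinned to node 0. A sketch of that arithmetic as the trace walks it (hugepages.sh@49-73); the even-split branch for the no-nodes-given case is omitted, and the names mirror the trace for readability only:

```bash
#!/usr/bin/env bash
# Sizing sketch: 2097152 kB requested / 2048 kB per page -> 1024 pages on node 0.
# On a real run default_hugepages would come from Hugepagesize in /proc/meminfo.
default_hugepages=2048   # kB

get_test_nr_hugepages() {
	local size=$1
	shift
	local node_ids=("$@")                 # optional NUMA node list, e.g. (0)
	(( size >= default_hugepages )) || return 1
	local nr_hugepages=$(( size / default_hugepages ))
	declare -ga nodes_test=()
	local node
	for node in "${node_ids[@]}"; do
		# Each explicitly requested node gets the full allocation, as traced.
		nodes_test[node]=$nr_hugepages
	done
	echo "nr_hugepages=$nr_hugepages -> nodes: ${!nodes_test[*]}"
}

get_test_nr_hugepages 2097152 0   # prints: nr_hugepages=1024 -> nodes: 0
```

The _no_nodes=2 local in the trace is the machine's NUMA node count; it only matters on the omitted branch, where the traced helper spreads the pages evenly across all nodes instead.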
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:36.762 18:56:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:40.063 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:40.063 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:40.063 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
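Note the [[ -e /sys/devices/system/node/node/meminfo ]] test above: the node id is empty, so the path is deliberately bogus and the helper falls back to the global /proc/meminfo; when a node id is passed, the per-node sysfs file is used instead, and its "Node <N> " line prefix is stripped so both sources parse identically. A sketch of that source selection, assuming the traced behaviour (illustrative, not setup/common.sh itself):

```bash
#!/usr/bin/env bash
# Sketch of the meminfo-source selection traced at setup/common.sh@18-29.
shopt -s extglob   # needed for the +([0-9]) pattern below

read_meminfo() {
	local node=${1:-} mem_f=/proc/meminfo mem
	# Per-node stats live in /sys/devices/system/node/node<N>/meminfo; with
	# node="" this tests the nonexistent ".../node/meminfo" path, exactly
	# as the trace shows, and keeps the /proc/meminfo default.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	# Per-node lines look like "Node 0 MemTotal: ..."; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	printf '%s\n' "${mem[@]}"
}

read_meminfo      # global stats
read_meminfo 0    # node 0 stats, if the sysfs file exists
```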
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182977880 kB' 'MemAvailable: 182586400 kB' 'Buffers: 2508 kB' 'Cached: 7313408 kB' 'SwapCached: 0 kB' 'Active: 7880936 kB' 'Inactive: 275368 kB' 'Active(anon): 7491332 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849684 kB' 'Mapped: 146960 kB' 'Shmem: 6650944 kB' 'KReclaimable: 206844 kB' 'Slab: 894952 kB' 'SReclaimable: 206844 kB' 'SUnreclaim: 688108 kB' 'KernelStack: 20432 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324620 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.063 18:56:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace repeats the compare/continue cycle for each remaining /proc/meminfo field until AnonHugePages is reached ...]
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.064 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... same capture preamble as above (setup/common.sh@18-31: local node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ') ...]
00:05:40.065 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182978760 kB' 'MemAvailable: 182587264 kB' 'Buffers: 2508 kB' 'Cached: 7313412 kB' 'SwapCached: 0 kB' 'Active: 7881208 kB' 'Inactive: 275368 kB' 'Active(anon): 7491604 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849928 kB' 'Mapped: 146872 kB' 'Shmem: 6650948 kB' 'KReclaimable: 206812 kB' 'Slab: 894824 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 688012 kB' 'KernelStack: 20448 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324588 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:40.065 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
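The snapshot just captured is internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts for Hugetlb: 2097152 kB, which is exactly the 2097152 kB the test requested. A quick spot-check of that identity on any box (the one-liner is illustrative, not part of the test):

```bash
# total pages x page size should equal the Hugetlb accounting line.
awk '/^HugePages_Total:/ {n=$2}
     /^Hugepagesize:/    {sz=$2}
     /^Hugetlb:/         {tl=$2}
     END {print n, "x", sz, "kB =", n*sz, "kB; Hugetlb =", tl, "kB"}' /proc/meminfo
# With the dump above: 1024 x 2048 kB = 2097152 kB; Hugetlb = 2097152 kB
```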
00:05:40.065 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace scans each remaining /proc/meminfo field (MemFree through HugePages_Rsvd) the same way: IFS=': ' / read / compare / continue ...]
00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.066 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182979016 kB' 'MemAvailable: 182587520 kB' 'Buffers: 2508 kB' 'Cached: 7313428 kB' 'SwapCached: 0 kB' 'Active: 7881236 kB' 'Inactive: 275368 kB' 'Active(anon): 7491632 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849928 kB' 'Mapped: 146872 kB' 'Shmem: 6650964 kB' 'KReclaimable: 206812 kB' 'Slab: 894824 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 688012 kB' 'KernelStack: 20448 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324588 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:40.067 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace continues the same compare/continue scan through the remaining fields; the capture breaks off at the SUnreclaim comparison ...]
continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.068 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.069 nr_hugepages=1024 00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.069 resv_hugepages=0 
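
The scan just traced is SPDK's get_meminfo helper in setup/common.sh: it loads the relevant meminfo file once, then walks it field by field until the requested key matches. A minimal sketch of that logic, reconstructed from the @17-@33 trace lines above rather than copied from the upstream source, so details such as error handling may differ:

    #!/usr/bin/env bash
    shopt -s extglob # the +([0-9]) pattern below needs extglob

    # get_meminfo FIELD [NODE] - print FIELD's value from /proc/meminfo,
    # or from the per-node meminfo file when NODE is given
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # a per-node query reads that node's own meminfo instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # per-node lines carry a "Node N " prefix - strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # compare each field name against the requested one, skip the rest
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With get=HugePages_Rsvd that loop produces the echo 0 / return 0 pair above, which hugepages.sh then stores as resv=0.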
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:40.069 surplus_hugepages=0
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:40.069 anon_hugepages=0
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.069 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182979836 kB' 'MemAvailable: 182588340 kB' 'Buffers: 2508 kB' 'Cached: 7313452 kB' 'SwapCached: 0 kB' 'Active: 7881288 kB' 'Inactive: 275368 kB' 'Active(anon): 7491684 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 849452 kB' 'Mapped: 146872 kB' 'Shmem: 6650988 kB' 'KReclaimable: 206812 kB' 'Slab: 894820 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 688008 kB' 'KernelStack: 20432 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324588 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
00:05:40.069 [... several dozen identical scan iterations trimmed: every field from MemTotal through Unaccepted fails the match against HugePages_Total ...]
00:05:40.070 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:40.070 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:40.070 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
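
get_nodes, traced at setup/hugepages.sh@27-33 just above, fills a nodes_sys array with one hugepage count per NUMA node (1024 on node0, 0 on node1 on this machine). A rough reconstruction; where the per-node count is read from is not visible in the trace, so the sysfs 2 MiB counter below is an assumption:

    shopt -s extglob # must be on before the +([0-9]) glob below is parsed

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # assumed source of the count; the trace only shows the result
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]} # 2 on this machine
        (( no_nodes > 0 ))
    }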
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 88956360 kB' 'MemUsed: 8702284 kB' 'SwapCached: 0 kB' 'Active: 5404320 kB' 'Inactive: 90960 kB' 'Active(anon): 5259852 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249872 kB' 'Mapped: 59668 kB' 'AnonPages: 254512 kB' 'Shmem: 5014444 kB' 'KernelStack: 9272 kB' 'PageTables: 2988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122232 kB' 'Slab: 458520 kB' 'SReclaimable: 122232 kB' 'SUnreclaim: 336288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.071 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.071 [... identical scan iterations trimmed: every node0 field from MemTotal through HugePages_Free fails the match against HugePages_Surp ...]
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:40.072 node0=1024 expecting 1024
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.072 18:56:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:42.612 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:42.612 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:42.613 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:42.613 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:42.613 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
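
The per-node bookkeeping traced at setup/hugepages.sh@115-130 adds the reserved and surplus counts onto each node's expected total and compares it with what the kernel reports for that node. A rough sketch with the array names taken from the trace (how nodes_test is seeded happens earlier in hugepages.sh and is not shown in this excerpt):

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv )) # resv=0 above
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # 0 for node0
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] # node0: 1024 == 1024, pass
    done

The rerun with NRHUGE=512 right after appears to be why setup.sh prints the INFO line: 1024 pages already exist on node0 and CLEAR_HUGE=no, so a request for only 512 leaves the existing allocation in place.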
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:42.613 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182955668 kB' 'MemAvailable: 182564172 kB' 'Buffers: 2508 kB' 'Cached: 7313552 kB' 'SwapCached: 0 kB' 'Active: 7881860 kB' 'Inactive: 275368 kB' 'Active(anon): 7492256 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850380 kB' 'Mapped: 147076 kB' 'Shmem: 6651088 kB' 'KReclaimable: 206812 kB' 'Slab: 894744 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 687932 kB' 'KernelStack: 20512 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324700 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@32 walks the snapshot keys in order, MemTotal through HardwareCorrupted, continuing past every one that is not AnonHugePages]
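Each `IFS=': ' read -r var val _` / `[[ $var == ... ]]` / `continue` quadruple collapsed above is one iteration of setup/common.sh's get_meminfo loop over the snapshot it just printed; the hugepages.sh@96 test before the call appears to gate the AnonHugePages sample on /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never", brackets marking the active mode) not being pinned to never. A condensed sketch of the helper's technique, mirroring the names in the trace -- treat it as an approximation, not a verbatim copy of the SPDK source:

```bash
#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) prefix-strip pattern below

# Sketch of the loop traced above: split each "key: value [unit]" line of
# the snapshot and print the value of the requested key.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo mem

	# With a node argument the per-node sysfs file exists; with none the
	# test degenerates to the "node/meminfo" path seen in the trace.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node N "; strip it so both
	# formats parse identically.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done
	return 1
}

get_meminfo HugePages_Free   # system-wide, e.g. 1024
get_meminfo HugePages_Free 0 # node 0 only
```

Splitting on `IFS=': '` makes `var` the field name and `val` the number, with a trailing `kB` falling into `_`; stripping the `Node +([0-9])` prefix lets one loop parse both /proc/meminfo and the per-node sysfs files.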
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
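With anon=0 recorded, verify_nr_hugepages reuses the same helper for two more counters below. Their standard /proc/meminfo meanings (kernel documentation semantics, not SPDK-specific), using the get_meminfo sketch above:

```bash
anon=$(get_meminfo AnonHugePages)  # kB mapped as transparent (THP) huge pages
surp=$(get_meminfo HugePages_Surp) # pages allocated beyond nr_hugepages (overcommit)
resv=$(get_meminfo HugePages_Rsvd) # pages reserved for mappings but not yet faulted in
echo "anon=${anon}kB surp=$surp resv=$resv"
```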
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:42.879 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182956292 kB' 'MemAvailable: 182564796 kB' 'Buffers: 2508 kB' 'Cached: 7313556 kB' 'SwapCached: 0 kB' 'Active: 7881996 kB' 'Inactive: 275368 kB' 'Active(anon): 7492392 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850604 kB' 'Mapped: 146980 kB' 'Shmem: 6651092 kB' 'KReclaimable: 206812 kB' 'Slab: 894712 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 687900 kB' 'KernelStack: 20496 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324684 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@32 walks the snapshot keys again, MemTotal through HugePages_Rsvd, continuing until HugePages_Surp matches]
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
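Every trace entry in this log carries the same prefix: elapsed time, wall-clock time, the test name (setup.sh.hugepages.no_shrink_alloc), and source@line before the traced command. That is ordinary bash xtrace with a customized PS4, which is re-expanded for every traced command; a hypothetical PS4 producing a similar prefix (the harness's actual value is not shown in this log):

```bash
#!/usr/bin/env bash
test_name=setup.sh.hugepages.no_shrink_alloc
# Single quotes defer expansion, so the substitutions run per traced line.
PS4='$(date +%T) $test_name -- ${BASH_SOURCE[0]##*/}@$LINENO -- # '
set -x
echo "hugepages check"
```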
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:42.881 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182957204 kB' 'MemAvailable: 182565708 kB' 'Buffers: 2508 kB' 'Cached: 7313572 kB' 'SwapCached: 0 kB' 'Active: 7882016 kB' 'Inactive: 275368 kB' 'Active(anon): 7492412 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850608 kB' 'Mapped: 146980 kB' 'Shmem: 6651108 kB' 'KReclaimable: 206812 kB' 'Slab: 894712 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 687900 kB' 'KernelStack: 20496 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324684 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB'
[setup/common.sh@32 walks the snapshot keys once more, MemTotal through Unaccepted, continuing toward HugePages_Rsvd]
00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:42.883 nr_hugepages=1024 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:42.883 resv_hugepages=0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:42.883 surplus_hugepages=0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:42.883 anon_hugepages=0 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 196532016 kB' 'MemFree: 182956448 kB' 'MemAvailable: 182564952 kB' 'Buffers: 2508 kB' 'Cached: 7313596 kB' 'SwapCached: 0 kB' 'Active: 7882024 kB' 'Inactive: 275368 kB' 'Active(anon): 7492420 kB' 'Inactive(anon): 0 kB' 'Active(file): 389604 kB' 'Inactive(file): 275368 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 850608 kB' 'Mapped: 146980 kB' 'Shmem: 6651132 kB' 'KReclaimable: 206812 kB' 'Slab: 894712 kB' 'SReclaimable: 206812 kB' 'SUnreclaim: 687900 kB' 'KernelStack: 20496 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 105606036 kB' 'Committed_AS: 8991984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 324684 kB' 'VmallocChunk: 0 kB' 'Percpu: 67968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1528788 kB' 'DirectMap2M: 18073600 kB' 'DirectMap1G: 182452224 kB' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.883 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.884 18:56:35 setup.sh.hugepages.no_shrink_alloc -- 
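The scan above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: every key that is not the requested one fails the [[ $var == $get ]] test and hits continue, which is why a single lookup leaves one trace line per meminfo field. A minimal sketch of that parsing pattern (a simplified reconstruction for illustration; the real helper also handles the per-node files that appear further down):

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g.: get_meminfo HugePages_Rsvd -> 0
    get_meminfo() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo              # one array element per meminfo line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" on colon/space
            [[ $var == "$get" ]] || continue        # mismatch -> next field (the traced continues)
            echo "$val"                             # numeric value only; the unit lands in $_
            return 0
        done
        return 1                                    # field not present
    }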
00:05:42.884 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each fail [[ $var == HugePages_Total ]] and continue)
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97658644 kB' 'MemFree: 88921988 kB' 'MemUsed: 8736656 kB' 'SwapCached: 0 kB' 'Active: 5405684 kB' 'Inactive: 90960 kB' 'Active(anon): 5261216 kB' 'Inactive(anon): 0 kB' 'Active(file): 144468 kB' 'Inactive(file): 90960 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5249952 kB' 'Mapped: 59672 kB' 'AnonPages: 255328 kB' 'Shmem: 5014524 kB' 'KernelStack: 9304 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122232 kB' 'Slab: 458536 kB' 'SReclaimable: 122232 kB' 'SUnreclaim: 336304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
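For node-scoped values the same helper switches mem_f to /sys/devices/system/node/node0/meminfo, where the kernel prefixes every line with "Node 0 " (e.g. "Node 0 HugePages_Surp: 0"); the mem=("${mem[@]#Node +([0-9]) }") step strips that prefix with an extglob pattern so the parse loop stays identical. A sketch of just that node-aware setup, assuming the standard sysfs layout:

    #!/usr/bin/env bash
    shopt -s extglob                      # +([0-9]) below needs extended globbing
    node=0
    mem_f=/proc/meminfo
    # Use the per-node file when a node is requested and the file exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}"             # now parseable exactly like /proc/meminfo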
00:05:42.885 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (scan: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail [[ $var == HugePages_Surp ]] and continue)
00:05:42.886 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:42.886 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:42.887 node0=1024 expecting 1024
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:42.887
00:05:42.887 real    0m6.175s
00:05:42.887 user    0m2.484s
00:05:42.887 sys     0m3.826s
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:42.887 18:56:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:42.887 ************************************
00:05:42.887 END TEST no_shrink_alloc
00:05:42.887 ************************************
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:42.887 18:56:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:42.887
00:05:42.887 real    0m23.337s
00:05:42.887 user    0m9.167s
00:05:42.887 sys     0m13.886s
00:05:42.887 18:56:35 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:42.887 18:56:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:42.887 ************************************
00:05:42.887 END TEST hugepages
00:05:42.887 ************************************
00:05:42.887 18:56:35 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:05:42.887 18:56:35 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:42.887 18:56:35 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:42.887 18:56:35 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:42.887 ************************************
00:05:42.887 START TEST driver
00:05:42.887 ************************************
00:05:42.887 18:56:35 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:05:43.147 * Looking for test storage...
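The clear_hp teardown traced above loops over every node and every page-size directory and echoes 0 into each; xtrace records only the echo 0, not its redirection, so the target file in the sketch below is an assumption based on the sysfs layout (each hugepages-<size> directory exposes an nr_hugepages count):

    #!/usr/bin/env bash
    shopt -s nullglob                         # empty loops instead of literal globs
    # Release every hugepage pool on every NUMA node, one page size at a time.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # assumed target; the trace shows only 'echo 0'
            done
        done
        export CLEAR_HUGE=yes                 # exported exactly as the trace shows
    }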
00:05:43.147 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:43.147 18:56:35 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:43.147 18:56:35 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.147 18:56:35 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.347 18:56:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:47.347 18:56:39 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.347 18:56:39 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.347 18:56:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:47.347 ************************************ 00:05:47.347 START TEST guess_driver 00:05:47.347 ************************************ 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 177 > 0 )) 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:47.347 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:47.347 18:56:39 setup.sh.driver.guess_driver 
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:05:47.347 Looking for driver=vfio-pci
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:05:47.347 18:56:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config
00:05:50.644 18:56:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:50.644 18:56:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:50.644 18:56:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:05:51.214 18:56:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:51.215 18:56:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:05:55.414 00:05:55.414 real 0m8.168s 00:05:55.414 user 0m2.442s 00:05:55.414 sys 0m4.252s
00:05:55.414 18:56:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:55.414 18:56:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:05:55.414 ************************************ 00:05:55.414 END TEST guess_driver 00:05:55.414 ************************************
00:05:55.414 00:05:55.414 real 0m12.541s 00:05:55.414 user 0m3.749s 00:05:55.414 sys 0m6.523s
00:05:55.414 18:56:47 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.414
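The starred START/END banners and the real/user/sys triple are emitted by a run_test-style wrapper around bash's time builtin; a minimal sketch of that pattern (wrapper name and banner width are illustrative, not the autotest_common.sh source):

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                 # the time keyword prints the real/user/sys summary
    local rc=$?               # capture the timed command's exit status
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}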
18:56:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:55.414 ************************************ 00:05:55.414 END TEST driver 00:05:55.414 ************************************ 00:05:55.674 18:56:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:55.674 18:56:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.674 18:56:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.674 18:56:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:55.674 ************************************ 00:05:55.674 START TEST devices 00:05:55.674 ************************************ 00:05:55.674 18:56:47 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:55.674 * Looking for test storage... 00:05:55.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:55.674 18:56:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:55.674 18:56:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:55.674 18:56:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.674 18:56:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:58.969 18:56:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:58.969 18:56:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:58.969 18:56:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:58.969 18:56:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:58.970 18:56:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:58.970 No valid GPT data, bailing 00:05:58.970 
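"No valid GPT data, bailing" from spdk-gpt.py means the disk carries no partition table the tests must preserve; the fallback that follows asks blkid for a PTTYPE and treats an empty answer as "free to use" (the `pt=` and `return 1` records below). A minimal sketch of that check; the function name is illustrative:

block_in_use_sketch() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block")
    if [[ -z $pt ]]; then
        return 1    # no partition-table signature: the device is free for the test
    fi
    return 0        # PTTYPE found: treat the device as in use and skip it
}

Here the "failure" return of 1 is the free path, which is why the log's `return 1` is immediately followed by size checks rather than a bail-out.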
18:56:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:58.970 18:56:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:58.970 18:56:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.970 18:56:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:58.970 ************************************ 00:05:58.970 START TEST nvme_mount 00:05:58.970 ************************************ 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:05:58.970 18:56:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:00.352 Creating new GPT entries in memory. 00:06:00.352 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:00.352 other utilities. 00:06:00.352 18:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:00.352 18:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:00.352 18:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:00.352 18:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:00.352 18:56:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:01.293 Creating new GPT entries in memory. 00:06:01.293 The operation has completed successfully. 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 588380 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.293 18:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:03.832 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.832 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:03.833 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:04.093 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:04.093 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:04.352 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:04.352 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:04.352 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:04.352 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
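The mkfs step starting here (and the earlier one against nvme0n1p1) is one helper: create the mount point, build an ext4 filesystem, then mount it; the optional size argument caps the filesystem, which is why the whole-disk run passes 1024M. A sketch under those assumptions; the helper name is illustrative:

mkfs_sketch() {
    local dev=$1 mount=$2 size=$3     # size (e.g. 1024M) is optional
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    # $size is deliberately unquoted so an empty argument disappears;
    # -q keeps mke2fs quiet, -F forces formatting without prompting
    mkfs.ext4 -qF "$dev" $size
    mount "$dev" "$mount"
}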
00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:04.352 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.612 18:56:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.153 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:07.154 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:07.413 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.414 18:56:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.716 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 
18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:10.717 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:10.717 00:06:10.717 real 0m11.334s 00:06:10.717 user 0m3.397s 00:06:10.717 sys 0m5.756s 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.717 18:57:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:10.717 ************************************ 00:06:10.717 END TEST nvme_mount 00:06:10.717 ************************************ 00:06:10.717 18:57:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:10.717 18:57:02 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
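The verify passes above (and again inside dm_mount below) are the source of the long blocks of `[[ 0000:xx:xx.x == ... ]]` lines: setup.sh config is replayed with PCI_ALLOWED pinned to the NVMe device, and each output row is matched against the expected "Active devices: ..." reason. A minimal sketch of that loop; invoking ./scripts/setup.sh config this way is inferred from the log, and the function name is illustrative:

verify_sketch() {
    local dev=$1 mounts=$2 found=0
    local pci status
    while read -r pci _ _ status; do
        # only the device under test may match, and only when the status column
        # says it was skipped because our mounts/holders are active on it
        if [[ $pci == "$dev" && $status == *'Active devices: '*"$mounts"* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
    ((found == 1))
}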
00:06:10.717 18:57:02 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.717 18:57:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:10.717 ************************************ 00:06:10.717 START TEST dm_mount 00:06:10.717 ************************************ 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:10.717 18:57:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:11.700 Creating new GPT entries in memory. 00:06:11.700 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:11.700 other utilities. 00:06:11.700 18:57:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:11.700 18:57:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.700 18:57:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:11.700 18:57:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:11.700 18:57:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:12.640 Creating new GPT entries in memory. 00:06:12.640 The operation has completed successfully. 
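The partition geometry above and in the second sgdisk call below is pure arithmetic: each test partition is 1 GiB (1073741824 bytes) expressed in 512-byte sectors, laid out back to back from sector 2048, with every sgdisk write serialized behind flock on the whole-disk node. A sketch of how 2048:2099199 and 2099200:4196351 fall out:

disk=/dev/nvme0n1
size=$((1073741824 / 512))        # 2097152 sectors per 1 GiB partition
part_start=0 part_end=0
for part in 1 2; do
    ((part_start = part_start == 0 ? 2048 : part_end + 1))
    ((part_end = part_start + size - 1))
    # flock holds an exclusive lock on the disk node while sgdisk writes,
    # so parallel jobs cannot race on the same partition table
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done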
00:06:12.640 18:57:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:12.640 18:57:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.640 18:57:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:12.640 18:57:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:12.640 18:57:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:13.579 The operation has completed successfully. 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 592746 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:13.579 18:57:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.579 18:57:06 
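dm_mount layers device-mapper on top of the two partitions and then resolves which /dev/dm-N node the mapper name landed on; the holders/ checks confirm both partitions sit under that node. A condensed sketch of the sequence; the two-segment linear table is an assumption, since the log shows only the `dmsetup create nvme_dm_test` call, not the table it was fed:

dmsetup create nvme_dm_test << 'TABLE'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE
dm=$(readlink -f /dev/mapper/nvme_dm_test)      # -> /dev/dm-2 in this run
dm=${dm##*/}                                    # -> dm-2
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] # both partitions must list
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] # the dm node as a holder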
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.579 18:57:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.869 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.870 18:57:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.420 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:19.679 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:19.679 00:06:19.679 real 0m9.143s 00:06:19.679 user 0m2.261s 00:06:19.679 sys 0m3.878s 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.679 18:57:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:19.679 ************************************ 00:06:19.679 END TEST dm_mount 00:06:19.679 ************************************ 00:06:19.679 18:57:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:19.679 18:57:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:19.679 18:57:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.679 18:57:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.679 18:57:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:19.680 18:57:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
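Teardown mirrors setup in reverse: unmount the device-mapper filesystem, remove the mapper device, then wipe the filesystem signatures off both partitions before cleanup_nvme re-wipes the whole disk below. The same order as plain commands; the mount point matches the log:

dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
mountpoint -q "$dm_mount" && umount "$dm_mount"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clears the ext4 magic (53 ef)
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2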
00:06:19.680 18:57:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:19.938 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:19.938 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:19.938 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:19.938 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.938 18:57:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:19.938 00:06:19.938 real 0m24.299s 00:06:19.938 user 0m7.016s 00:06:19.938 sys 0m11.978s 00:06:19.938 18:57:12 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.938 18:57:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.938 ************************************ 00:06:19.938 END TEST devices 00:06:19.938 ************************************ 00:06:19.938 00:06:19.938 real 1m21.557s 00:06:19.938 user 0m27.264s 00:06:19.938 sys 0m45.165s 00:06:19.938 18:57:12 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.938 18:57:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:19.938 ************************************ 00:06:19.938 END TEST setup.sh 00:06:19.938 ************************************ 00:06:19.938 18:57:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:06:23.231 Hugepages 00:06:23.231 node hugesize free / total 00:06:23.231 node0 1048576kB 0 / 0 00:06:23.231 node0 2048kB 2048 / 2048 00:06:23.231 node1 1048576kB 0 / 0 00:06:23.231 node1 2048kB 0 / 0 00:06:23.231 00:06:23.231 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:23.231 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:23.231 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:23.231 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:23.231 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:23.231 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:23.231 18:57:15 -- spdk/autotest.sh@130 -- # uname -s 00:06:23.231 18:57:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:23.231 18:57:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:23.231 18:57:15 -- common/autotest_common.sh@1531 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:25.771 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:25.771 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:26.031 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:26.031 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:26.031 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:26.974 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:26.974 18:57:19 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:27.913 18:57:20 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:27.913 18:57:20 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:27.913 18:57:20 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:27.913 18:57:20 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:27.913 18:57:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:27.913 18:57:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:27.913 18:57:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:27.913 18:57:20 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:27.913 18:57:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:27.913 18:57:20 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:27.913 18:57:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:06:27.913 18:57:20 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:31.210 Waiting for block devices as requested 00:06:31.210 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:31.210 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:31.210 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:31.210 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:31.210 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:31.210 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:31.469 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:31.469 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:31.469 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:31.469 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:31.729 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:31.729 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:31.729 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:31.988 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:31.988 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:31.988 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:31.988 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:32.248 18:57:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:32.248 18:57:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1502 -- # readlink -f 
/sys/class/nvme/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:06:32.248 18:57:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:32.248 18:57:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:32.248 18:57:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:32.248 18:57:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:32.248 18:57:24 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:32.248 18:57:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:32.248 18:57:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:32.248 18:57:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:32.248 18:57:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:32.248 18:57:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:32.248 18:57:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:32.248 18:57:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:32.248 18:57:24 -- common/autotest_common.sh@1557 -- # continue 00:06:32.248 18:57:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:32.248 18:57:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.248 18:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:32.248 18:57:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:32.248 18:57:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.248 18:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:32.248 18:57:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:35.544 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:35.544 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:36.113 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:36.113 18:57:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:36.113 18:57:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.113 18:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:36.113 18:57:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:36.113 18:57:28 -- common/autotest_common.sh@1591 -- # 
mapfile -t bdfs 00:06:36.113 18:57:28 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:36.113 18:57:28 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:36.113 18:57:28 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:36.113 18:57:28 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:36.113 18:57:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:36.113 18:57:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:36.113 18:57:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:36.113 18:57:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:36.113 18:57:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:36.373 18:57:28 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:36.373 18:57:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:06:36.373 18:57:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:36.373 18:57:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:36.373 18:57:28 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:36.373 18:57:28 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:36.373 18:57:28 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:36.373 18:57:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:06:36.373 18:57:28 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:06:36.373 18:57:28 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=602270 00:06:36.373 18:57:28 -- common/autotest_common.sh@1598 -- # waitforlisten 602270 00:06:36.373 18:57:28 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.373 18:57:28 -- common/autotest_common.sh@831 -- # '[' -z 602270 ']' 00:06:36.373 18:57:28 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.373 18:57:28 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.373 18:57:28 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.373 18:57:28 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.373 18:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:36.373 [2024-07-25 18:57:28.688609] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
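Before spdk_tgt starts, the trace above filters the node's NVMe controllers down to those with PCI device ID 0x0a54. Below is a hedged bash reconstruction of that get_nvme_bdfs_by_id step, built only from the calls visible in the trace; `$rootdir` stands in for the absolute workspace path, and the canonical helper in test/common/autotest_common.sh may differ in detail:

```bash
# Hedged sketch of the BDF filtering traced above.
get_nvme_bdfs_by_id_sketch() {
    local target_id=$1 bdf device bdfs=()
    # gen_nvme.sh emits a JSON config; jq pulls out each controller's BDF
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        # sysfs exposes the PCI device ID for each BDF
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        # keep only controllers whose ID matches (0x0a54 in this run)
        [[ $device == "$target_id" ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"
}

# On this node, `get_nvme_bdfs_by_id_sketch 0x0a54` would print 0000:5e:00.0.
```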
00:06:36.373 [2024-07-25 18:57:28.688655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602270 ] 00:06:36.373 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.373 [2024-07-25 18:57:28.758346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.373 [2024-07-25 18:57:28.830209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.311 18:57:29 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.311 18:57:29 -- common/autotest_common.sh@864 -- # return 0 00:06:37.311 18:57:29 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:37.311 18:57:29 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:37.311 18:57:29 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:40.601 nvme0n1 00:06:40.601 18:57:32 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:40.601 [2024-07-25 18:57:32.697903] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:40.601 request: 00:06:40.601 { 00:06:40.601 "nvme_ctrlr_name": "nvme0", 00:06:40.601 "password": "test", 00:06:40.601 "method": "bdev_nvme_opal_revert", 00:06:40.601 "req_id": 1 00:06:40.601 } 00:06:40.601 Got JSON-RPC error response 00:06:40.601 response: 00:06:40.601 { 00:06:40.601 "code": -32602, 00:06:40.601 "message": "Invalid parameters" 00:06:40.601 } 00:06:40.601 18:57:32 -- common/autotest_common.sh@1604 -- # true 00:06:40.601 18:57:32 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:40.601 18:57:32 -- common/autotest_common.sh@1608 -- # killprocess 602270 00:06:40.601 18:57:32 -- common/autotest_common.sh@950 -- # '[' -z 602270 ']' 00:06:40.601 18:57:32 -- common/autotest_common.sh@954 -- # kill -0 602270 00:06:40.601 18:57:32 -- common/autotest_common.sh@955 -- # uname 00:06:40.601 18:57:32 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.601 18:57:32 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602270 00:06:40.601 18:57:32 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.601 18:57:32 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.601 18:57:32 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602270' 00:06:40.601 killing process with pid 602270 00:06:40.601 18:57:32 -- common/autotest_common.sh@969 -- # kill 602270 00:06:40.601 18:57:32 -- common/autotest_common.sh@974 -- # wait 602270 00:06:41.980 18:57:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:41.980 18:57:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:41.980 18:57:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:41.980 18:57:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:41.980 18:57:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:41.980 18:57:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.980 18:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.980 18:57:34 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:41.980 18:57:34 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:41.980 18:57:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.980 
18:57:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.980 18:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:41.980 ************************************ 00:06:41.980 START TEST env 00:06:41.980 ************************************ 00:06:41.980 18:57:34 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:42.240 * Looking for test storage... 00:06:42.240 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:42.240 18:57:34 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:42.240 18:57:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.240 18:57:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.240 18:57:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.240 ************************************ 00:06:42.240 START TEST env_memory 00:06:42.240 ************************************ 00:06:42.240 18:57:34 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:42.240 00:06:42.240 00:06:42.240 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.240 http://cunit.sourceforge.net/ 00:06:42.240 00:06:42.240 00:06:42.240 Suite: memory 00:06:42.240 Test: alloc and free memory map ...[2024-07-25 18:57:34.573683] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:42.240 passed 00:06:42.240 Test: mem map translation ...[2024-07-25 18:57:34.592303] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:42.240 [2024-07-25 18:57:34.592317] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:42.240 [2024-07-25 18:57:34.592354] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:42.240 [2024-07-25 18:57:34.592364] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:42.240 passed 00:06:42.240 Test: mem map registration ...[2024-07-25 18:57:34.629145] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:42.240 [2024-07-25 18:57:34.629159] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:42.240 passed 00:06:42.240 Test: mem map adjacent registrations ...passed 00:06:42.240 00:06:42.240 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.240 suites 1 1 n/a 0 0 00:06:42.240 tests 4 4 4 0 0 00:06:42.240 asserts 152 152 152 0 n/a 00:06:42.240 00:06:42.240 Elapsed time = 0.135 seconds 00:06:42.240 00:06:42.240 real 0m0.147s 00:06:42.240 user 0m0.139s 00:06:42.240 sys 0m0.007s 00:06:42.240 18:57:34 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.240 18:57:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:42.240 ************************************ 
00:06:42.240 END TEST env_memory 00:06:42.240 ************************************ 00:06:42.501 18:57:34 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.501 18:57:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.501 18:57:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.501 18:57:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.501 ************************************ 00:06:42.501 START TEST env_vtophys 00:06:42.501 ************************************ 00:06:42.501 18:57:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.501 EAL: lib.eal log level changed from notice to debug 00:06:42.501 EAL: Detected lcore 0 as core 0 on socket 0 00:06:42.501 EAL: Detected lcore 1 as core 1 on socket 0 00:06:42.501 EAL: Detected lcore 2 as core 2 on socket 0 00:06:42.501 EAL: Detected lcore 3 as core 3 on socket 0 00:06:42.501 EAL: Detected lcore 4 as core 4 on socket 0 00:06:42.501 EAL: Detected lcore 5 as core 5 on socket 0 00:06:42.501 EAL: Detected lcore 6 as core 6 on socket 0 00:06:42.501 EAL: Detected lcore 7 as core 8 on socket 0 00:06:42.501 EAL: Detected lcore 8 as core 9 on socket 0 00:06:42.501 EAL: Detected lcore 9 as core 10 on socket 0 00:06:42.501 EAL: Detected lcore 10 as core 11 on socket 0 00:06:42.501 EAL: Detected lcore 11 as core 12 on socket 0 00:06:42.501 EAL: Detected lcore 12 as core 13 on socket 0 00:06:42.501 EAL: Detected lcore 13 as core 16 on socket 0 00:06:42.501 EAL: Detected lcore 14 as core 17 on socket 0 00:06:42.501 EAL: Detected lcore 15 as core 18 on socket 0 00:06:42.501 EAL: Detected lcore 16 as core 19 on socket 0 00:06:42.501 EAL: Detected lcore 17 as core 20 on socket 0 00:06:42.501 EAL: Detected lcore 18 as core 21 on socket 0 00:06:42.501 EAL: Detected lcore 19 as core 25 on socket 0 00:06:42.501 EAL: Detected lcore 20 as core 26 on socket 0 00:06:42.501 EAL: Detected lcore 21 as core 27 on socket 0 00:06:42.501 EAL: Detected lcore 22 as core 28 on socket 0 00:06:42.501 EAL: Detected lcore 23 as core 29 on socket 0 00:06:42.501 EAL: Detected lcore 24 as core 0 on socket 1 00:06:42.501 EAL: Detected lcore 25 as core 1 on socket 1 00:06:42.501 EAL: Detected lcore 26 as core 2 on socket 1 00:06:42.501 EAL: Detected lcore 27 as core 3 on socket 1 00:06:42.501 EAL: Detected lcore 28 as core 4 on socket 1 00:06:42.501 EAL: Detected lcore 29 as core 5 on socket 1 00:06:42.501 EAL: Detected lcore 30 as core 6 on socket 1 00:06:42.501 EAL: Detected lcore 31 as core 8 on socket 1 00:06:42.501 EAL: Detected lcore 32 as core 9 on socket 1 00:06:42.501 EAL: Detected lcore 33 as core 10 on socket 1 00:06:42.501 EAL: Detected lcore 34 as core 11 on socket 1 00:06:42.501 EAL: Detected lcore 35 as core 12 on socket 1 00:06:42.501 EAL: Detected lcore 36 as core 13 on socket 1 00:06:42.501 EAL: Detected lcore 37 as core 16 on socket 1 00:06:42.501 EAL: Detected lcore 38 as core 17 on socket 1 00:06:42.501 EAL: Detected lcore 39 as core 18 on socket 1 00:06:42.501 EAL: Detected lcore 40 as core 19 on socket 1 00:06:42.501 EAL: Detected lcore 41 as core 20 on socket 1 00:06:42.501 EAL: Detected lcore 42 as core 21 on socket 1 00:06:42.501 EAL: Detected lcore 43 as core 25 on socket 1 00:06:42.501 EAL: Detected lcore 44 as core 26 on socket 1 00:06:42.501 EAL: Detected lcore 45 as core 27 on socket 1 00:06:42.501 EAL: Detected lcore 46 as core 28 on socket 1 
00:06:42.501 EAL: Detected lcore 47 as core 29 on socket 1 00:06:42.501 EAL: Detected lcore 48 as core 0 on socket 0 00:06:42.501 EAL: Detected lcore 49 as core 1 on socket 0 00:06:42.501 EAL: Detected lcore 50 as core 2 on socket 0 00:06:42.501 EAL: Detected lcore 51 as core 3 on socket 0 00:06:42.501 EAL: Detected lcore 52 as core 4 on socket 0 00:06:42.501 EAL: Detected lcore 53 as core 5 on socket 0 00:06:42.501 EAL: Detected lcore 54 as core 6 on socket 0 00:06:42.501 EAL: Detected lcore 55 as core 8 on socket 0 00:06:42.501 EAL: Detected lcore 56 as core 9 on socket 0 00:06:42.501 EAL: Detected lcore 57 as core 10 on socket 0 00:06:42.501 EAL: Detected lcore 58 as core 11 on socket 0 00:06:42.501 EAL: Detected lcore 59 as core 12 on socket 0 00:06:42.501 EAL: Detected lcore 60 as core 13 on socket 0 00:06:42.501 EAL: Detected lcore 61 as core 16 on socket 0 00:06:42.501 EAL: Detected lcore 62 as core 17 on socket 0 00:06:42.501 EAL: Detected lcore 63 as core 18 on socket 0 00:06:42.501 EAL: Detected lcore 64 as core 19 on socket 0 00:06:42.501 EAL: Detected lcore 65 as core 20 on socket 0 00:06:42.501 EAL: Detected lcore 66 as core 21 on socket 0 00:06:42.501 EAL: Detected lcore 67 as core 25 on socket 0 00:06:42.501 EAL: Detected lcore 68 as core 26 on socket 0 00:06:42.501 EAL: Detected lcore 69 as core 27 on socket 0 00:06:42.501 EAL: Detected lcore 70 as core 28 on socket 0 00:06:42.501 EAL: Detected lcore 71 as core 29 on socket 0 00:06:42.501 EAL: Detected lcore 72 as core 0 on socket 1 00:06:42.501 EAL: Detected lcore 73 as core 1 on socket 1 00:06:42.501 EAL: Detected lcore 74 as core 2 on socket 1 00:06:42.501 EAL: Detected lcore 75 as core 3 on socket 1 00:06:42.501 EAL: Detected lcore 76 as core 4 on socket 1 00:06:42.501 EAL: Detected lcore 77 as core 5 on socket 1 00:06:42.501 EAL: Detected lcore 78 as core 6 on socket 1 00:06:42.501 EAL: Detected lcore 79 as core 8 on socket 1 00:06:42.501 EAL: Detected lcore 80 as core 9 on socket 1 00:06:42.501 EAL: Detected lcore 81 as core 10 on socket 1 00:06:42.501 EAL: Detected lcore 82 as core 11 on socket 1 00:06:42.501 EAL: Detected lcore 83 as core 12 on socket 1 00:06:42.501 EAL: Detected lcore 84 as core 13 on socket 1 00:06:42.501 EAL: Detected lcore 85 as core 16 on socket 1 00:06:42.501 EAL: Detected lcore 86 as core 17 on socket 1 00:06:42.501 EAL: Detected lcore 87 as core 18 on socket 1 00:06:42.501 EAL: Detected lcore 88 as core 19 on socket 1 00:06:42.501 EAL: Detected lcore 89 as core 20 on socket 1 00:06:42.501 EAL: Detected lcore 90 as core 21 on socket 1 00:06:42.501 EAL: Detected lcore 91 as core 25 on socket 1 00:06:42.501 EAL: Detected lcore 92 as core 26 on socket 1 00:06:42.501 EAL: Detected lcore 93 as core 27 on socket 1 00:06:42.501 EAL: Detected lcore 94 as core 28 on socket 1 00:06:42.501 EAL: Detected lcore 95 as core 29 on socket 1 00:06:42.501 EAL: Maximum logical cores by configuration: 128 00:06:42.501 EAL: Detected CPU lcores: 96 00:06:42.501 EAL: Detected NUMA nodes: 2 00:06:42.501 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:42.501 EAL: Detected shared linkage of DPDK 00:06:42.501 EAL: No shared files mode enabled, IPC will be disabled 00:06:42.501 EAL: Bus pci wants IOVA as 'DC' 00:06:42.501 EAL: Buses did not request a specific IOVA mode. 00:06:42.501 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:42.501 EAL: Selected IOVA mode 'VA' 00:06:42.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.502 EAL: Probing VFIO support... 
00:06:42.502 EAL: IOMMU type 1 (Type 1) is supported 00:06:42.502 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:42.502 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:42.502 EAL: VFIO support initialized 00:06:42.502 EAL: Ask a virtual area of 0x2e000 bytes 00:06:42.502 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:42.502 EAL: Setting up physically contiguous memory... 00:06:42.502 EAL: Setting maximum number of open files to 524288 00:06:42.502 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:42.502 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:42.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:42.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:42.502 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.502 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:42.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.502 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.502 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:42.502 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:42.502 EAL: Hugepages will be freed exactly as allocated. 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: TSC frequency is ~2300000 KHz 00:06:42.502 EAL: Main lcore 0 is ready (tid=7f77e3891a00;cpuset=[0]) 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 0 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 2MB 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:42.502 EAL: Mem event callback 'spdk:(nil)' registered 00:06:42.502 00:06:42.502 00:06:42.502 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.502 http://cunit.sourceforge.net/ 00:06:42.502 00:06:42.502 00:06:42.502 Suite: components_suite 00:06:42.502 Test: vtophys_malloc_test ...passed 00:06:42.502 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 4MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 4MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 6MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 6MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 10MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 10MB 00:06:42.502 EAL: Trying to obtain current memory policy. 
00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 18MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 18MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 34MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 34MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 66MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 66MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.502 EAL: Restoring previous memory policy: 4 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was expanded by 130MB 00:06:42.502 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.502 EAL: request: mp_malloc_sync 00:06:42.502 EAL: No shared files mode enabled, IPC is disabled 00:06:42.502 EAL: Heap on socket 0 was shrunk by 130MB 00:06:42.502 EAL: Trying to obtain current memory policy. 00:06:42.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.762 EAL: Restoring previous memory policy: 4 00:06:42.762 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.762 EAL: request: mp_malloc_sync 00:06:42.762 EAL: No shared files mode enabled, IPC is disabled 00:06:42.762 EAL: Heap on socket 0 was expanded by 258MB 00:06:42.762 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.762 EAL: request: mp_malloc_sync 00:06:42.762 EAL: No shared files mode enabled, IPC is disabled 00:06:42.762 EAL: Heap on socket 0 was shrunk by 258MB 00:06:42.762 EAL: Trying to obtain current memory policy. 
00:06:42.762 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.762 EAL: Restoring previous memory policy: 4 00:06:42.762 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.762 EAL: request: mp_malloc_sync 00:06:42.762 EAL: No shared files mode enabled, IPC is disabled 00:06:42.762 EAL: Heap on socket 0 was expanded by 514MB 00:06:43.022 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.022 EAL: request: mp_malloc_sync 00:06:43.022 EAL: No shared files mode enabled, IPC is disabled 00:06:43.022 EAL: Heap on socket 0 was shrunk by 514MB 00:06:43.022 EAL: Trying to obtain current memory policy. 00:06:43.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.283 EAL: Restoring previous memory policy: 4 00:06:43.283 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.283 EAL: request: mp_malloc_sync 00:06:43.283 EAL: No shared files mode enabled, IPC is disabled 00:06:43.283 EAL: Heap on socket 0 was expanded by 1026MB 00:06:43.283 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.543 EAL: request: mp_malloc_sync 00:06:43.543 EAL: No shared files mode enabled, IPC is disabled 00:06:43.543 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:43.543 passed 00:06:43.543 00:06:43.543 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.543 suites 1 1 n/a 0 0 00:06:43.543 tests 2 2 2 0 0 00:06:43.543 asserts 497 497 497 0 n/a 00:06:43.543 00:06:43.543 Elapsed time = 0.979 seconds 00:06:43.543 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.543 EAL: request: mp_malloc_sync 00:06:43.543 EAL: No shared files mode enabled, IPC is disabled 00:06:43.543 EAL: Heap on socket 0 was shrunk by 2MB 00:06:43.543 EAL: No shared files mode enabled, IPC is disabled 00:06:43.543 EAL: No shared files mode enabled, IPC is disabled 00:06:43.543 EAL: No shared files mode enabled, IPC is disabled 00:06:43.543 00:06:43.543 real 0m1.108s 00:06:43.543 user 0m0.640s 00:06:43.543 sys 0m0.439s 00:06:43.543 18:57:35 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.543 18:57:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:43.543 ************************************ 00:06:43.543 END TEST env_vtophys 00:06:43.543 ************************************ 00:06:43.543 18:57:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:43.543 18:57:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.543 18:57:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.543 18:57:35 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.543 ************************************ 00:06:43.543 START TEST env_pci 00:06:43.543 ************************************ 00:06:43.543 18:57:35 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:43.543 00:06:43.543 00:06:43.543 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.543 http://cunit.sourceforge.net/ 00:06:43.543 00:06:43.543 00:06:43.543 Suite: pci 00:06:43.543 Test: pci_hook ...[2024-07-25 18:57:35.946970] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 603598 has claimed it 00:06:43.543 EAL: Cannot find device (10000:00:01.0) 00:06:43.543 EAL: Failed to attach device on primary process 00:06:43.543 passed 00:06:43.543 00:06:43.543 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.543 suites 1 1 
n/a 0 0 00:06:43.543 tests 1 1 1 0 0 00:06:43.543 asserts 25 25 25 0 n/a 00:06:43.543 00:06:43.543 Elapsed time = 0.026 seconds 00:06:43.543 00:06:43.543 real 0m0.046s 00:06:43.543 user 0m0.014s 00:06:43.543 sys 0m0.032s 00:06:43.543 18:57:35 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.543 18:57:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:43.543 ************************************ 00:06:43.543 END TEST env_pci 00:06:43.543 ************************************ 00:06:43.543 18:57:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:43.543 18:57:36 env -- env/env.sh@15 -- # uname 00:06:43.803 18:57:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:43.803 18:57:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:43.803 18:57:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:43.803 18:57:36 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:43.803 18:57:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.803 18:57:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.803 ************************************ 00:06:43.803 START TEST env_dpdk_post_init 00:06:43.803 ************************************ 00:06:43.803 18:57:36 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:43.803 EAL: Detected CPU lcores: 96 00:06:43.803 EAL: Detected NUMA nodes: 2 00:06:43.803 EAL: Detected shared linkage of DPDK 00:06:43.803 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:43.803 EAL: Selected IOVA mode 'VA' 00:06:43.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.804 EAL: VFIO support initialized 00:06:43.804 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:43.804 EAL: Using IOMMU type 1 (Type 1) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:43.804 EAL: Ignore mapping IO port bar(1) 00:06:43.804 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:44.064 EAL: Ignore mapping IO port bar(1) 00:06:44.064 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:44.633 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:44.633 EAL: Ignore mapping 
IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:44.633 EAL: Ignore mapping IO port bar(1) 00:06:44.633 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:47.920 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:47.920 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:48.180 Starting DPDK initialization... 00:06:48.180 Starting SPDK post initialization... 00:06:48.180 SPDK NVMe probe 00:06:48.180 Attaching to 0000:5e:00.0 00:06:48.180 Attached to 0000:5e:00.0 00:06:48.180 Cleaning up... 00:06:48.180 00:06:48.180 real 0m4.361s 00:06:48.180 user 0m3.265s 00:06:48.180 sys 0m0.172s 00:06:48.180 18:57:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.180 18:57:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 END TEST env_dpdk_post_init 00:06:48.180 ************************************ 00:06:48.180 18:57:40 env -- env/env.sh@26 -- # uname 00:06:48.180 18:57:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:48.180 18:57:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.180 18:57:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.180 18:57:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.180 18:57:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 START TEST env_mem_callbacks 00:06:48.180 ************************************ 00:06:48.180 18:57:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:48.180 EAL: Detected CPU lcores: 96 00:06:48.180 EAL: Detected NUMA nodes: 2 00:06:48.180 EAL: Detected shared linkage of DPDK 00:06:48.180 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:48.180 EAL: Selected IOVA mode 'VA' 00:06:48.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.180 EAL: VFIO support initialized 00:06:48.180 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:48.180 00:06:48.180 00:06:48.180 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.180 http://cunit.sourceforge.net/ 00:06:48.180 00:06:48.180 00:06:48.180 Suite: memory 00:06:48.180 Test: test ... 
00:06:48.180 register 0x200000200000 2097152 00:06:48.180 malloc 3145728 00:06:48.180 register 0x200000400000 4194304 00:06:48.180 buf 0x200000500000 len 3145728 PASSED 00:06:48.180 malloc 64 00:06:48.180 buf 0x2000004fff40 len 64 PASSED 00:06:48.180 malloc 4194304 00:06:48.180 register 0x200000800000 6291456 00:06:48.180 buf 0x200000a00000 len 4194304 PASSED 00:06:48.180 free 0x200000500000 3145728 00:06:48.180 free 0x2000004fff40 64 00:06:48.180 unregister 0x200000400000 4194304 PASSED 00:06:48.180 free 0x200000a00000 4194304 00:06:48.180 unregister 0x200000800000 6291456 PASSED 00:06:48.180 malloc 8388608 00:06:48.180 register 0x200000400000 10485760 00:06:48.180 buf 0x200000600000 len 8388608 PASSED 00:06:48.180 free 0x200000600000 8388608 00:06:48.180 unregister 0x200000400000 10485760 PASSED 00:06:48.180 passed 00:06:48.180 00:06:48.180 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.180 suites 1 1 n/a 0 0 00:06:48.180 tests 1 1 1 0 0 00:06:48.180 asserts 15 15 15 0 n/a 00:06:48.180 00:06:48.180 Elapsed time = 0.008 seconds 00:06:48.180 00:06:48.180 real 0m0.057s 00:06:48.180 user 0m0.021s 00:06:48.180 sys 0m0.036s 00:06:48.180 18:57:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.180 18:57:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 END TEST env_mem_callbacks 00:06:48.180 ************************************ 00:06:48.180 00:06:48.180 real 0m6.170s 00:06:48.180 user 0m4.262s 00:06:48.180 sys 0m0.983s 00:06:48.180 18:57:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.180 18:57:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 ************************************ 00:06:48.180 END TEST env 00:06:48.180 ************************************ 00:06:48.180 18:57:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:48.180 18:57:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.180 18:57:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.180 18:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.440 ************************************ 00:06:48.440 START TEST rpc 00:06:48.440 ************************************ 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:48.440 * Looking for test storage... 00:06:48.440 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:48.440 18:57:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=604431 00:06:48.440 18:57:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:48.440 18:57:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.440 18:57:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 604431 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 604431 ']' 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
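The rpc.sh startup traced above shows the usual spdk_tgt launch pattern: record the PID, then block in waitforlisten until the target's RPC socket answers. The sketch below is a hedged reconstruction of that wait loop, not the exact helper from autotest_common.sh; rpc_get_methods is a real SPDK RPC, the traced max_retries is 100, and the sleep interval here is illustrative:

```bash
# Hedged sketch of the waitforlisten pattern visible in the trace:
# poll the target's RPC socket until it answers, or give up.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [[ -z $pid ]] && return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        # Bail out early if the target died instead of starting
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server is listening
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}

# e.g. waitforlisten_sketch "$spdk_pid" /var/tmp/spdk.sock
```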
00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.440 18:57:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.440 [2024-07-25 18:57:40.794767] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:48.440 [2024-07-25 18:57:40.794818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604431 ] 00:06:48.440 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.440 [2024-07-25 18:57:40.865321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.699 [2024-07-25 18:57:40.939166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:48.700 [2024-07-25 18:57:40.939199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 604431' to capture a snapshot of events at runtime. 00:06:48.700 [2024-07-25 18:57:40.939206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.700 [2024-07-25 18:57:40.939212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.700 [2024-07-25 18:57:40.939218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid604431 for offline analysis/debug. 00:06:48.700 [2024-07-25 18:57:40.939236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.267 18:57:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.267 18:57:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.268 18:57:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:49.268 18:57:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:49.268 18:57:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:49.268 18:57:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:49.268 18:57:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.268 18:57:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.268 18:57:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.268 ************************************ 00:06:49.268 START TEST rpc_integrity 00:06:49.268 ************************************ 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 
0 == 0 ']' 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:49.268 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.268 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:49.527 { 00:06:49.527 "name": "Malloc0", 00:06:49.527 "aliases": [ 00:06:49.527 "48da7e5b-a0d6-4fb7-b009-eb0f29dda386" 00:06:49.527 ], 00:06:49.527 "product_name": "Malloc disk", 00:06:49.527 "block_size": 512, 00:06:49.527 "num_blocks": 16384, 00:06:49.527 "uuid": "48da7e5b-a0d6-4fb7-b009-eb0f29dda386", 00:06:49.527 "assigned_rate_limits": { 00:06:49.527 "rw_ios_per_sec": 0, 00:06:49.527 "rw_mbytes_per_sec": 0, 00:06:49.527 "r_mbytes_per_sec": 0, 00:06:49.527 "w_mbytes_per_sec": 0 00:06:49.527 }, 00:06:49.527 "claimed": false, 00:06:49.527 "zoned": false, 00:06:49.527 "supported_io_types": { 00:06:49.527 "read": true, 00:06:49.527 "write": true, 00:06:49.527 "unmap": true, 00:06:49.527 "flush": true, 00:06:49.527 "reset": true, 00:06:49.527 "nvme_admin": false, 00:06:49.527 "nvme_io": false, 00:06:49.527 "nvme_io_md": false, 00:06:49.527 "write_zeroes": true, 00:06:49.527 "zcopy": true, 00:06:49.527 "get_zone_info": false, 00:06:49.527 "zone_management": false, 00:06:49.527 "zone_append": false, 00:06:49.527 "compare": false, 00:06:49.527 "compare_and_write": false, 00:06:49.527 "abort": true, 00:06:49.527 "seek_hole": false, 00:06:49.527 "seek_data": false, 00:06:49.527 "copy": true, 00:06:49.527 "nvme_iov_md": false 00:06:49.527 }, 00:06:49.527 "memory_domains": [ 00:06:49.527 { 00:06:49.527 "dma_device_id": "system", 00:06:49.527 "dma_device_type": 1 00:06:49.527 }, 00:06:49.527 { 00:06:49.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.527 "dma_device_type": 2 00:06:49.527 } 00:06:49.527 ], 00:06:49.527 "driver_specific": {} 00:06:49.527 } 00:06:49.527 ]' 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.527 [2024-07-25 18:57:41.788371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:49.527 [2024-07-25 18:57:41.788400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.527 [2024-07-25 18:57:41.788412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ae3c0 00:06:49.527 [2024-07-25 18:57:41.788419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.527 [2024-07-25 18:57:41.789428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.527 [2024-07-25 18:57:41.789449] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:49.527 Passthru0 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.527 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.527 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:49.527 { 00:06:49.527 "name": "Malloc0", 00:06:49.527 "aliases": [ 00:06:49.527 "48da7e5b-a0d6-4fb7-b009-eb0f29dda386" 00:06:49.527 ], 00:06:49.527 "product_name": "Malloc disk", 00:06:49.527 "block_size": 512, 00:06:49.527 "num_blocks": 16384, 00:06:49.527 "uuid": "48da7e5b-a0d6-4fb7-b009-eb0f29dda386", 00:06:49.527 "assigned_rate_limits": { 00:06:49.527 "rw_ios_per_sec": 0, 00:06:49.527 "rw_mbytes_per_sec": 0, 00:06:49.527 "r_mbytes_per_sec": 0, 00:06:49.527 "w_mbytes_per_sec": 0 00:06:49.527 }, 00:06:49.527 "claimed": true, 00:06:49.527 "claim_type": "exclusive_write", 00:06:49.527 "zoned": false, 00:06:49.527 "supported_io_types": { 00:06:49.527 "read": true, 00:06:49.527 "write": true, 00:06:49.527 "unmap": true, 00:06:49.527 "flush": true, 00:06:49.527 "reset": true, 00:06:49.527 "nvme_admin": false, 00:06:49.527 "nvme_io": false, 00:06:49.527 "nvme_io_md": false, 00:06:49.527 "write_zeroes": true, 00:06:49.527 "zcopy": true, 00:06:49.527 "get_zone_info": false, 00:06:49.527 "zone_management": false, 00:06:49.527 "zone_append": false, 00:06:49.527 "compare": false, 00:06:49.527 "compare_and_write": false, 00:06:49.527 "abort": true, 00:06:49.527 "seek_hole": false, 00:06:49.527 "seek_data": false, 00:06:49.527 "copy": true, 00:06:49.527 "nvme_iov_md": false 00:06:49.527 }, 00:06:49.527 "memory_domains": [ 00:06:49.527 { 00:06:49.527 "dma_device_id": "system", 00:06:49.527 "dma_device_type": 1 00:06:49.527 }, 00:06:49.527 { 00:06:49.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.527 "dma_device_type": 2 00:06:49.527 } 00:06:49.527 ], 00:06:49.528 "driver_specific": {} 00:06:49.528 }, 00:06:49.528 { 00:06:49.528 "name": "Passthru0", 00:06:49.528 "aliases": [ 00:06:49.528 "e2c43085-f29f-5799-9cd0-9139056a2417" 00:06:49.528 ], 00:06:49.528 "product_name": "passthru", 00:06:49.528 "block_size": 512, 00:06:49.528 "num_blocks": 16384, 00:06:49.528 "uuid": "e2c43085-f29f-5799-9cd0-9139056a2417", 00:06:49.528 "assigned_rate_limits": { 00:06:49.528 "rw_ios_per_sec": 0, 00:06:49.528 "rw_mbytes_per_sec": 0, 00:06:49.528 "r_mbytes_per_sec": 0, 00:06:49.528 "w_mbytes_per_sec": 0 00:06:49.528 }, 00:06:49.528 "claimed": false, 00:06:49.528 "zoned": false, 00:06:49.528 "supported_io_types": { 00:06:49.528 "read": true, 00:06:49.528 "write": true, 00:06:49.528 "unmap": true, 00:06:49.528 "flush": true, 00:06:49.528 "reset": true, 00:06:49.528 "nvme_admin": false, 00:06:49.528 "nvme_io": false, 00:06:49.528 "nvme_io_md": false, 00:06:49.528 "write_zeroes": true, 00:06:49.528 "zcopy": true, 00:06:49.528 "get_zone_info": false, 00:06:49.528 "zone_management": false, 00:06:49.528 "zone_append": false, 00:06:49.528 "compare": false, 00:06:49.528 "compare_and_write": false, 00:06:49.528 "abort": true, 00:06:49.528 "seek_hole": false, 00:06:49.528 "seek_data": false, 00:06:49.528 "copy": true, 00:06:49.528 "nvme_iov_md": false 00:06:49.528 }, 00:06:49.528 "memory_domains": [ 
00:06:49.528 { 00:06:49.528 "dma_device_id": "system", 00:06:49.528 "dma_device_type": 1 00:06:49.528 }, 00:06:49.528 { 00:06:49.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.528 "dma_device_type": 2 00:06:49.528 } 00:06:49.528 ], 00:06:49.528 "driver_specific": { 00:06:49.528 "passthru": { 00:06:49.528 "name": "Passthru0", 00:06:49.528 "base_bdev_name": "Malloc0" 00:06:49.528 } 00:06:49.528 } 00:06:49.528 } 00:06:49.528 ]' 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:49.528 18:57:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:49.528 00:06:49.528 real 0m0.275s 00:06:49.528 user 0m0.169s 00:06:49.528 sys 0m0.040s 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.528 18:57:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:49.528 ************************************ 00:06:49.528 END TEST rpc_integrity 00:06:49.528 ************************************ 00:06:49.528 18:57:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:49.528 18:57:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.528 18:57:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.528 18:57:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 ************************************ 00:06:49.787 START TEST rpc_plugins 00:06:49.787 ************************************ 00:06:49.787 18:57:41 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:49.787 18:57:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 
18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:49.787 { 00:06:49.787 "name": "Malloc1", 00:06:49.787 "aliases": [ 00:06:49.787 "6cf94ec7-1968-4c95-a0e1-241083446e28" 00:06:49.787 ], 00:06:49.787 "product_name": "Malloc disk", 00:06:49.787 "block_size": 4096, 00:06:49.787 "num_blocks": 256, 00:06:49.787 "uuid": "6cf94ec7-1968-4c95-a0e1-241083446e28", 00:06:49.787 "assigned_rate_limits": { 00:06:49.787 "rw_ios_per_sec": 0, 00:06:49.787 "rw_mbytes_per_sec": 0, 00:06:49.787 "r_mbytes_per_sec": 0, 00:06:49.787 "w_mbytes_per_sec": 0 00:06:49.787 }, 00:06:49.787 "claimed": false, 00:06:49.787 "zoned": false, 00:06:49.787 "supported_io_types": { 00:06:49.787 "read": true, 00:06:49.787 "write": true, 00:06:49.787 "unmap": true, 00:06:49.787 "flush": true, 00:06:49.787 "reset": true, 00:06:49.787 "nvme_admin": false, 00:06:49.787 "nvme_io": false, 00:06:49.787 "nvme_io_md": false, 00:06:49.787 "write_zeroes": true, 00:06:49.787 "zcopy": true, 00:06:49.787 "get_zone_info": false, 00:06:49.787 "zone_management": false, 00:06:49.787 "zone_append": false, 00:06:49.787 "compare": false, 00:06:49.787 "compare_and_write": false, 00:06:49.787 "abort": true, 00:06:49.787 "seek_hole": false, 00:06:49.787 "seek_data": false, 00:06:49.787 "copy": true, 00:06:49.787 "nvme_iov_md": false 00:06:49.787 }, 00:06:49.787 "memory_domains": [ 00:06:49.787 { 00:06:49.787 "dma_device_id": "system", 00:06:49.787 "dma_device_type": 1 00:06:49.787 }, 00:06:49.787 { 00:06:49.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.787 "dma_device_type": 2 00:06:49.787 } 00:06:49.787 ], 00:06:49.787 "driver_specific": {} 00:06:49.787 } 00:06:49.787 ]' 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:49.787 18:57:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:49.787 00:06:49.787 real 0m0.139s 00:06:49.787 user 0m0.089s 00:06:49.787 sys 0m0.017s 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.787 18:57:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:49.787 ************************************ 00:06:49.787 END TEST rpc_plugins 00:06:49.787 ************************************ 00:06:49.787 18:57:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:49.787 18:57:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.787 18:57:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.787 18:57:42 rpc -- common/autotest_common.sh@10 -- # set +x 
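The rpc_integrity and rpc_plugins passes above exercise the same create/inspect/delete cycle through the harness's rpc_cmd wrapper. A minimal sketch of that cycle driven by scripts/rpc.py directly, assuming the target from this run is still listening on the default /var/tmp/spdk.sock and the commands run from the SPDK repo root (the trace suggests rpc_cmd is a thin wrapper over this script):

    # create an 8 MiB malloc bdev with 512-byte blocks; the RPC prints the new name (Malloc0 above)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev on top of it, claiming the base bdev
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    # both bdevs are now listed, which is what the "'[' 2 == 2 ']'" check above asserts
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order: the passthru first, then the base malloc bdev
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0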
00:06:49.787 ************************************ 00:06:49.787 START TEST rpc_trace_cmd_test 00:06:49.787 ************************************ 00:06:49.787 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:49.787 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:49.787 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:49.787 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.788 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.788 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.788 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:49.788 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid604431", 00:06:49.788 "tpoint_group_mask": "0x8", 00:06:49.788 "iscsi_conn": { 00:06:49.788 "mask": "0x2", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "scsi": { 00:06:49.788 "mask": "0x4", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "bdev": { 00:06:49.788 "mask": "0x8", 00:06:49.788 "tpoint_mask": "0xffffffffffffffff" 00:06:49.788 }, 00:06:49.788 "nvmf_rdma": { 00:06:49.788 "mask": "0x10", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "nvmf_tcp": { 00:06:49.788 "mask": "0x20", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "ftl": { 00:06:49.788 "mask": "0x40", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "blobfs": { 00:06:49.788 "mask": "0x80", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "dsa": { 00:06:49.788 "mask": "0x200", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "thread": { 00:06:49.788 "mask": "0x400", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "nvme_pcie": { 00:06:49.788 "mask": "0x800", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "iaa": { 00:06:49.788 "mask": "0x1000", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "nvme_tcp": { 00:06:49.788 "mask": "0x2000", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "bdev_nvme": { 00:06:49.788 "mask": "0x4000", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 }, 00:06:49.788 "sock": { 00:06:49.788 "mask": "0x8000", 00:06:49.788 "tpoint_mask": "0x0" 00:06:49.788 } 00:06:49.788 }' 00:06:49.788 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:50.047 00:06:50.047 real 0m0.225s 00:06:50.047 user 0m0.198s 00:06:50.047 sys 0m0.019s 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.047 18:57:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 
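The trace checks here pull three facts out of trace_get_info: the shm path the target registered, the group mask 0x8 (the bdev group, matching the "Tracepoint Group Mask bdev specified" notice near the top of this run), and a bdev tpoint_mask of all ones. A short sketch of reading the same state by hand, assuming this workspace's build tree and the pid 604431 from this run:

    # only the bdev group (bit 0x8) is enabled, and all of its tracepoints are on
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # prints 0xffffffffffffffff
    # decode the shared-memory trace file noted at startup, /dev/shm/spdk_tgt_trace.pid604431
    ./build/bin/spdk_trace -s spdk_tgt -p 604431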
00:06:50.047 ************************************ 00:06:50.047 END TEST rpc_trace_cmd_test 00:06:50.047 ************************************ 00:06:50.047 18:57:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:50.047 18:57:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:50.047 18:57:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:50.047 18:57:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.047 18:57:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.047 18:57:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.047 ************************************ 00:06:50.047 START TEST rpc_daemon_integrity 00:06:50.047 ************************************ 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:50.047 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:50.306 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:50.307 { 00:06:50.307 "name": "Malloc2", 00:06:50.307 "aliases": [ 00:06:50.307 "8f5aaffa-b710-4acb-b572-ef1cd49d93ea" 00:06:50.307 ], 00:06:50.307 "product_name": "Malloc disk", 00:06:50.307 "block_size": 512, 00:06:50.307 "num_blocks": 16384, 00:06:50.307 "uuid": "8f5aaffa-b710-4acb-b572-ef1cd49d93ea", 00:06:50.307 "assigned_rate_limits": { 00:06:50.307 "rw_ios_per_sec": 0, 00:06:50.307 "rw_mbytes_per_sec": 0, 00:06:50.307 "r_mbytes_per_sec": 0, 00:06:50.307 "w_mbytes_per_sec": 0 00:06:50.307 }, 00:06:50.307 "claimed": false, 00:06:50.307 "zoned": false, 00:06:50.307 "supported_io_types": { 00:06:50.307 "read": true, 00:06:50.307 "write": true, 00:06:50.307 "unmap": true, 00:06:50.307 "flush": true, 00:06:50.307 "reset": true, 00:06:50.307 "nvme_admin": false, 00:06:50.307 "nvme_io": false, 00:06:50.307 "nvme_io_md": false, 00:06:50.307 "write_zeroes": true, 00:06:50.307 "zcopy": true, 00:06:50.307 "get_zone_info": false, 00:06:50.307 "zone_management": false, 00:06:50.307 "zone_append": false, 00:06:50.307 "compare": false, 00:06:50.307 "compare_and_write": false, 00:06:50.307 "abort": true, 00:06:50.307 "seek_hole": false, 00:06:50.307 
"seek_data": false, 00:06:50.307 "copy": true, 00:06:50.307 "nvme_iov_md": false 00:06:50.307 }, 00:06:50.307 "memory_domains": [ 00:06:50.307 { 00:06:50.307 "dma_device_id": "system", 00:06:50.307 "dma_device_type": 1 00:06:50.307 }, 00:06:50.307 { 00:06:50.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.307 "dma_device_type": 2 00:06:50.307 } 00:06:50.307 ], 00:06:50.307 "driver_specific": {} 00:06:50.307 } 00:06:50.307 ]' 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 [2024-07-25 18:57:42.634652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:50.307 [2024-07-25 18:57:42.634683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.307 [2024-07-25 18:57:42.634698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22aee00 00:06:50.307 [2024-07-25 18:57:42.634705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.307 [2024-07-25 18:57:42.635654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.307 [2024-07-25 18:57:42.635675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:50.307 Passthru0 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:50.307 { 00:06:50.307 "name": "Malloc2", 00:06:50.307 "aliases": [ 00:06:50.307 "8f5aaffa-b710-4acb-b572-ef1cd49d93ea" 00:06:50.307 ], 00:06:50.307 "product_name": "Malloc disk", 00:06:50.307 "block_size": 512, 00:06:50.307 "num_blocks": 16384, 00:06:50.307 "uuid": "8f5aaffa-b710-4acb-b572-ef1cd49d93ea", 00:06:50.307 "assigned_rate_limits": { 00:06:50.307 "rw_ios_per_sec": 0, 00:06:50.307 "rw_mbytes_per_sec": 0, 00:06:50.307 "r_mbytes_per_sec": 0, 00:06:50.307 "w_mbytes_per_sec": 0 00:06:50.307 }, 00:06:50.307 "claimed": true, 00:06:50.307 "claim_type": "exclusive_write", 00:06:50.307 "zoned": false, 00:06:50.307 "supported_io_types": { 00:06:50.307 "read": true, 00:06:50.307 "write": true, 00:06:50.307 "unmap": true, 00:06:50.307 "flush": true, 00:06:50.307 "reset": true, 00:06:50.307 "nvme_admin": false, 00:06:50.307 "nvme_io": false, 00:06:50.307 "nvme_io_md": false, 00:06:50.307 "write_zeroes": true, 00:06:50.307 "zcopy": true, 00:06:50.307 "get_zone_info": false, 00:06:50.307 "zone_management": false, 00:06:50.307 "zone_append": false, 00:06:50.307 "compare": false, 00:06:50.307 "compare_and_write": false, 00:06:50.307 "abort": true, 00:06:50.307 "seek_hole": false, 00:06:50.307 "seek_data": false, 00:06:50.307 "copy": true, 00:06:50.307 "nvme_iov_md": false 00:06:50.307 }, 00:06:50.307 "memory_domains": 
[ 00:06:50.307 { 00:06:50.307 "dma_device_id": "system", 00:06:50.307 "dma_device_type": 1 00:06:50.307 }, 00:06:50.307 { 00:06:50.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.307 "dma_device_type": 2 00:06:50.307 } 00:06:50.307 ], 00:06:50.307 "driver_specific": {} 00:06:50.307 }, 00:06:50.307 { 00:06:50.307 "name": "Passthru0", 00:06:50.307 "aliases": [ 00:06:50.307 "03964e14-655b-5094-8582-92ead9299f9b" 00:06:50.307 ], 00:06:50.307 "product_name": "passthru", 00:06:50.307 "block_size": 512, 00:06:50.307 "num_blocks": 16384, 00:06:50.307 "uuid": "03964e14-655b-5094-8582-92ead9299f9b", 00:06:50.307 "assigned_rate_limits": { 00:06:50.307 "rw_ios_per_sec": 0, 00:06:50.307 "rw_mbytes_per_sec": 0, 00:06:50.307 "r_mbytes_per_sec": 0, 00:06:50.307 "w_mbytes_per_sec": 0 00:06:50.307 }, 00:06:50.307 "claimed": false, 00:06:50.307 "zoned": false, 00:06:50.307 "supported_io_types": { 00:06:50.307 "read": true, 00:06:50.307 "write": true, 00:06:50.307 "unmap": true, 00:06:50.307 "flush": true, 00:06:50.307 "reset": true, 00:06:50.307 "nvme_admin": false, 00:06:50.307 "nvme_io": false, 00:06:50.307 "nvme_io_md": false, 00:06:50.307 "write_zeroes": true, 00:06:50.307 "zcopy": true, 00:06:50.307 "get_zone_info": false, 00:06:50.307 "zone_management": false, 00:06:50.307 "zone_append": false, 00:06:50.307 "compare": false, 00:06:50.307 "compare_and_write": false, 00:06:50.307 "abort": true, 00:06:50.307 "seek_hole": false, 00:06:50.307 "seek_data": false, 00:06:50.307 "copy": true, 00:06:50.307 "nvme_iov_md": false 00:06:50.307 }, 00:06:50.307 "memory_domains": [ 00:06:50.307 { 00:06:50.307 "dma_device_id": "system", 00:06:50.307 "dma_device_type": 1 00:06:50.307 }, 00:06:50.307 { 00:06:50.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.307 "dma_device_type": 2 00:06:50.307 } 00:06:50.307 ], 00:06:50.307 "driver_specific": { 00:06:50.307 "passthru": { 00:06:50.307 "name": "Passthru0", 00:06:50.307 "base_bdev_name": "Malloc2" 00:06:50.307 } 00:06:50.307 } 00:06:50.307 } 00:06:50.307 ]' 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.307 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:50.308 18:57:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:50.568 18:57:42 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:50.568 00:06:50.568 real 0m0.279s 00:06:50.568 user 0m0.175s 00:06:50.568 sys 0m0.039s 00:06:50.568 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.568 18:57:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.568 ************************************ 00:06:50.568 END TEST rpc_daemon_integrity 00:06:50.568 ************************************ 00:06:50.568 18:57:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:50.568 18:57:42 rpc -- rpc/rpc.sh@84 -- # killprocess 604431 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@950 -- # '[' -z 604431 ']' 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@954 -- # kill -0 604431 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@955 -- # uname 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 604431 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 604431' 00:06:50.568 killing process with pid 604431 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@969 -- # kill 604431 00:06:50.568 18:57:42 rpc -- common/autotest_common.sh@974 -- # wait 604431 00:06:50.828 00:06:50.828 real 0m2.509s 00:06:50.828 user 0m3.251s 00:06:50.828 sys 0m0.693s 00:06:50.828 18:57:43 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.828 18:57:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 ************************************ 00:06:50.828 END TEST rpc 00:06:50.828 ************************************ 00:06:50.828 18:57:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:50.828 18:57:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.828 18:57:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.828 18:57:43 -- common/autotest_common.sh@10 -- # set +x 00:06:50.828 ************************************ 00:06:50.828 START TEST skip_rpc 00:06:50.828 ************************************ 00:06:50.828 18:57:43 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:51.089 * Looking for test storage... 
00:06:51.089 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:51.089 18:57:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:51.089 18:57:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:51.089 18:57:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:51.089 18:57:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.089 18:57:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.089 18:57:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.089 ************************************ 00:06:51.089 START TEST skip_rpc 00:06:51.089 ************************************ 00:06:51.089 18:57:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:51.089 18:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=605073 00:06:51.089 18:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.089 18:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:51.089 18:57:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:51.089 [2024-07-25 18:57:43.401648] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:51.089 [2024-07-25 18:57:43.401684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605073 ] 00:06:51.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.089 [2024-07-25 18:57:43.467160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.089 [2024-07-25 18:57:43.536538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- 
# trap - SIGINT SIGTERM EXIT 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 605073 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 605073 ']' 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 605073 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 605073 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 605073' 00:06:56.368 killing process with pid 605073 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 605073 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 605073 00:06:56.368 00:06:56.368 real 0m5.370s 00:06:56.368 user 0m5.130s 00:06:56.368 sys 0m0.275s 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.368 18:57:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 ************************************ 00:06:56.368 END TEST skip_rpc 00:06:56.368 ************************************ 00:06:56.368 18:57:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:56.368 18:57:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.368 18:57:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.368 18:57:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.368 ************************************ 00:06:56.368 START TEST skip_rpc_with_json 00:06:56.368 ************************************ 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=606026 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 606026 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 606026 ']' 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
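The skip_rpc pass that just finished is a negative test: the target is started with --no-rpc-server, so no socket ever comes up and spdk_get_version has to fail; the harness's NOT wrapper turns that failure (es=1) into a pass. The same check by hand, assuming this workspace's build tree and the test's fixed five-second startup wait:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5                                    # mirror the test's startup wait
    if ./scripts/rpc.py spdk_get_version; then
        echo 'unexpected: RPC server answered' >&2
    fi
    kill "$tgt"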
00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.368 18:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:56.628 [2024-07-25 18:57:48.838947] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:56.628 [2024-07-25 18:57:48.838990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606026 ] 00:06:56.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.628 [2024-07-25 18:57:48.904457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.628 [2024-07-25 18:57:48.982401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.197 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.197 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:57.197 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:57.197 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.197 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.457 [2024-07-25 18:57:49.669322] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:57.457 request: 00:06:57.457 { 00:06:57.457 "trtype": "tcp", 00:06:57.457 "method": "nvmf_get_transports", 00:06:57.457 "req_id": 1 00:06:57.457 } 00:06:57.457 Got JSON-RPC error response 00:06:57.457 response: 00:06:57.457 { 00:06:57.457 "code": -19, 00:06:57.457 "message": "No such device" 00:06:57.457 } 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.457 [2024-07-25 18:57:49.681436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.457 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:57.457 { 00:06:57.457 "subsystems": [ 00:06:57.457 { 00:06:57.457 "subsystem": "keyring", 00:06:57.457 "config": [] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "iobuf", 00:06:57.457 "config": [ 00:06:57.457 { 00:06:57.457 "method": "iobuf_set_options", 00:06:57.457 "params": { 00:06:57.457 "small_pool_count": 8192, 00:06:57.457 "large_pool_count": 1024, 00:06:57.457 "small_bufsize": 8192, 00:06:57.457 "large_bufsize": 135168 00:06:57.457 } 00:06:57.457 } 00:06:57.457 ] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": 
"sock", 00:06:57.457 "config": [ 00:06:57.457 { 00:06:57.457 "method": "sock_set_default_impl", 00:06:57.457 "params": { 00:06:57.457 "impl_name": "posix" 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "sock_impl_set_options", 00:06:57.457 "params": { 00:06:57.457 "impl_name": "ssl", 00:06:57.457 "recv_buf_size": 4096, 00:06:57.457 "send_buf_size": 4096, 00:06:57.457 "enable_recv_pipe": true, 00:06:57.457 "enable_quickack": false, 00:06:57.457 "enable_placement_id": 0, 00:06:57.457 "enable_zerocopy_send_server": true, 00:06:57.457 "enable_zerocopy_send_client": false, 00:06:57.457 "zerocopy_threshold": 0, 00:06:57.457 "tls_version": 0, 00:06:57.457 "enable_ktls": false 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "sock_impl_set_options", 00:06:57.457 "params": { 00:06:57.457 "impl_name": "posix", 00:06:57.457 "recv_buf_size": 2097152, 00:06:57.457 "send_buf_size": 2097152, 00:06:57.457 "enable_recv_pipe": true, 00:06:57.457 "enable_quickack": false, 00:06:57.457 "enable_placement_id": 0, 00:06:57.457 "enable_zerocopy_send_server": true, 00:06:57.457 "enable_zerocopy_send_client": false, 00:06:57.457 "zerocopy_threshold": 0, 00:06:57.457 "tls_version": 0, 00:06:57.457 "enable_ktls": false 00:06:57.457 } 00:06:57.457 } 00:06:57.457 ] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "vmd", 00:06:57.457 "config": [] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "accel", 00:06:57.457 "config": [ 00:06:57.457 { 00:06:57.457 "method": "accel_set_options", 00:06:57.457 "params": { 00:06:57.457 "small_cache_size": 128, 00:06:57.457 "large_cache_size": 16, 00:06:57.457 "task_count": 2048, 00:06:57.457 "sequence_count": 2048, 00:06:57.457 "buf_count": 2048 00:06:57.457 } 00:06:57.457 } 00:06:57.457 ] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "bdev", 00:06:57.457 "config": [ 00:06:57.457 { 00:06:57.457 "method": "bdev_set_options", 00:06:57.457 "params": { 00:06:57.457 "bdev_io_pool_size": 65535, 00:06:57.457 "bdev_io_cache_size": 256, 00:06:57.457 "bdev_auto_examine": true, 00:06:57.457 "iobuf_small_cache_size": 128, 00:06:57.457 "iobuf_large_cache_size": 16 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "bdev_raid_set_options", 00:06:57.457 "params": { 00:06:57.457 "process_window_size_kb": 1024, 00:06:57.457 "process_max_bandwidth_mb_sec": 0 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "bdev_iscsi_set_options", 00:06:57.457 "params": { 00:06:57.457 "timeout_sec": 30 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "bdev_nvme_set_options", 00:06:57.457 "params": { 00:06:57.457 "action_on_timeout": "none", 00:06:57.457 "timeout_us": 0, 00:06:57.457 "timeout_admin_us": 0, 00:06:57.457 "keep_alive_timeout_ms": 10000, 00:06:57.457 "arbitration_burst": 0, 00:06:57.457 "low_priority_weight": 0, 00:06:57.457 "medium_priority_weight": 0, 00:06:57.457 "high_priority_weight": 0, 00:06:57.457 "nvme_adminq_poll_period_us": 10000, 00:06:57.457 "nvme_ioq_poll_period_us": 0, 00:06:57.457 "io_queue_requests": 0, 00:06:57.457 "delay_cmd_submit": true, 00:06:57.457 "transport_retry_count": 4, 00:06:57.457 "bdev_retry_count": 3, 00:06:57.457 "transport_ack_timeout": 0, 00:06:57.457 "ctrlr_loss_timeout_sec": 0, 00:06:57.457 "reconnect_delay_sec": 0, 00:06:57.457 "fast_io_fail_timeout_sec": 0, 00:06:57.457 "disable_auto_failback": false, 00:06:57.457 "generate_uuids": false, 00:06:57.457 "transport_tos": 0, 00:06:57.457 "nvme_error_stat": false, 00:06:57.457 "rdma_srq_size": 
0, 00:06:57.457 "io_path_stat": false, 00:06:57.457 "allow_accel_sequence": false, 00:06:57.457 "rdma_max_cq_size": 0, 00:06:57.457 "rdma_cm_event_timeout_ms": 0, 00:06:57.457 "dhchap_digests": [ 00:06:57.457 "sha256", 00:06:57.457 "sha384", 00:06:57.457 "sha512" 00:06:57.457 ], 00:06:57.457 "dhchap_dhgroups": [ 00:06:57.457 "null", 00:06:57.457 "ffdhe2048", 00:06:57.457 "ffdhe3072", 00:06:57.457 "ffdhe4096", 00:06:57.457 "ffdhe6144", 00:06:57.457 "ffdhe8192" 00:06:57.457 ] 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "bdev_nvme_set_hotplug", 00:06:57.457 "params": { 00:06:57.457 "period_us": 100000, 00:06:57.457 "enable": false 00:06:57.457 } 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "method": "bdev_wait_for_examine" 00:06:57.457 } 00:06:57.457 ] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "scsi", 00:06:57.457 "config": null 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "scheduler", 00:06:57.457 "config": [ 00:06:57.457 { 00:06:57.457 "method": "framework_set_scheduler", 00:06:57.457 "params": { 00:06:57.457 "name": "static" 00:06:57.457 } 00:06:57.457 } 00:06:57.457 ] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "vhost_scsi", 00:06:57.457 "config": [] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "vhost_blk", 00:06:57.457 "config": [] 00:06:57.457 }, 00:06:57.457 { 00:06:57.457 "subsystem": "ublk", 00:06:57.457 "config": [] 00:06:57.457 }, 00:06:57.458 { 00:06:57.458 "subsystem": "nbd", 00:06:57.458 "config": [] 00:06:57.458 }, 00:06:57.458 { 00:06:57.458 "subsystem": "nvmf", 00:06:57.458 "config": [ 00:06:57.458 { 00:06:57.458 "method": "nvmf_set_config", 00:06:57.458 "params": { 00:06:57.458 "discovery_filter": "match_any", 00:06:57.458 "admin_cmd_passthru": { 00:06:57.458 "identify_ctrlr": false 00:06:57.458 } 00:06:57.458 } 00:06:57.458 }, 00:06:57.458 { 00:06:57.458 "method": "nvmf_set_max_subsystems", 00:06:57.458 "params": { 00:06:57.458 "max_subsystems": 1024 00:06:57.458 } 00:06:57.458 }, 00:06:57.458 { 00:06:57.458 "method": "nvmf_set_crdt", 00:06:57.458 "params": { 00:06:57.458 "crdt1": 0, 00:06:57.458 "crdt2": 0, 00:06:57.458 "crdt3": 0 00:06:57.458 } 00:06:57.458 }, 00:06:57.458 { 00:06:57.458 "method": "nvmf_create_transport", 00:06:57.458 "params": { 00:06:57.458 "trtype": "TCP", 00:06:57.458 "max_queue_depth": 128, 00:06:57.458 "max_io_qpairs_per_ctrlr": 127, 00:06:57.458 "in_capsule_data_size": 4096, 00:06:57.458 "max_io_size": 131072, 00:06:57.458 "io_unit_size": 131072, 00:06:57.458 "max_aq_depth": 128, 00:06:57.458 "num_shared_buffers": 511, 00:06:57.458 "buf_cache_size": 4294967295, 00:06:57.458 "dif_insert_or_strip": false, 00:06:57.458 "zcopy": false, 00:06:57.458 "c2h_success": true, 00:06:57.458 "sock_priority": 0, 00:06:57.458 "abort_timeout_sec": 1, 00:06:57.458 "ack_timeout": 0, 00:06:57.458 "data_wr_pool_size": 0 00:06:57.458 } 00:06:57.458 } 00:06:57.458 ] 00:06:57.458 }, 00:06:57.458 { 00:06:57.458 "subsystem": "iscsi", 00:06:57.458 "config": [ 00:06:57.458 { 00:06:57.458 "method": "iscsi_set_options", 00:06:57.458 "params": { 00:06:57.458 "node_base": "iqn.2016-06.io.spdk", 00:06:57.458 "max_sessions": 128, 00:06:57.458 "max_connections_per_session": 2, 00:06:57.458 "max_queue_depth": 64, 00:06:57.458 "default_time2wait": 2, 00:06:57.458 "default_time2retain": 20, 00:06:57.458 "first_burst_length": 8192, 00:06:57.458 "immediate_data": true, 00:06:57.458 "allow_duplicated_isid": false, 00:06:57.458 "error_recovery_level": 0, 00:06:57.458 "nop_timeout": 60, 00:06:57.458 
"nop_in_interval": 30, 00:06:57.458 "disable_chap": false, 00:06:57.458 "require_chap": false, 00:06:57.458 "mutual_chap": false, 00:06:57.458 "chap_group": 0, 00:06:57.458 "max_large_datain_per_connection": 64, 00:06:57.458 "max_r2t_per_connection": 4, 00:06:57.458 "pdu_pool_size": 36864, 00:06:57.458 "immediate_data_pool_size": 16384, 00:06:57.458 "data_out_pool_size": 2048 00:06:57.458 } 00:06:57.458 } 00:06:57.458 ] 00:06:57.458 } 00:06:57.458 ] 00:06:57.458 } 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 606026 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 606026 ']' 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 606026 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 606026 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 606026' 00:06:57.458 killing process with pid 606026 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 606026 00:06:57.458 18:57:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 606026 00:06:58.027 18:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=606278 00:06:58.027 18:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:58.027 18:57:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 606278 ']' 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 606278' 00:07:03.325 killing process with pid 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 606278 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:03.325 00:07:03.325 real 0m6.784s 00:07:03.325 user 0m6.609s 00:07:03.325 sys 0m0.642s 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 END TEST skip_rpc_with_json 00:07:03.325 ************************************ 00:07:03.325 18:57:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 START TEST skip_rpc_with_delay 00:07:03.325 ************************************ 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.325 [2024-07-25 18:57:55.697906] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
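The skip_rpc_with_json pass above closes a config round-trip: with the RPC server up, create the TCP transport, dump the running config with save_config, then boot a fresh target from that JSON and grep its log for the 'TCP Transport Init' notice. A condensed sketch of the same round-trip, assuming this workspace's paths and the default RPC socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    # restart without an RPC server; the transport must come back purely from the JSON
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json 2>&1 | tee test/rpc/log.txt &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt && echo 'transport restored from config.json'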
00:07:03.325 [2024-07-25 18:57:55.697970] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.325 00:07:03.325 real 0m0.065s 00:07:03.325 user 0m0.046s 00:07:03.325 sys 0m0.019s 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.325 18:57:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 END TEST skip_rpc_with_delay 00:07:03.325 ************************************ 00:07:03.325 18:57:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.325 18:57:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.325 18:57:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.325 18:57:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 START TEST exit_on_failed_rpc_init 00:07:03.325 ************************************ 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=607257 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 607257 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 607257 ']' 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.325 18:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.586 [2024-07-25 18:57:55.833878] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:03.586 [2024-07-25 18:57:55.833929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607257 ] 00:07:03.586 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.586 [2024-07-25 18:57:55.902914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.586 [2024-07-25 18:57:55.972184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.524 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.524 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:04.524 18:57:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.524 18:57:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.524 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.525 [2024-07-25 18:57:56.727717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:04.525 [2024-07-25 18:57:56.727759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607493 ] 00:07:04.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.525 [2024-07-25 18:57:56.793302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.525 [2024-07-25 18:57:56.865761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.525 [2024-07-25 18:57:56.865829] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:07:04.525 [2024-07-25 18:57:56.865838] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.525 [2024-07-25 18:57:56.865844] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 607257 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 607257 ']' 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 607257 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 607257 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 607257' 00:07:04.525 killing process with pid 607257 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 607257 00:07:04.525 18:57:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 607257 00:07:05.095 00:07:05.095 real 0m1.511s 00:07:05.095 user 0m1.763s 00:07:05.095 sys 0m0.426s 00:07:05.095 18:57:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.095 18:57:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.095 ************************************ 00:07:05.095 END TEST exit_on_failed_rpc_init 00:07:05.095 ************************************ 00:07:05.095 18:57:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:05.095 00:07:05.095 real 0m14.099s 00:07:05.095 user 0m13.697s 00:07:05.095 sys 0m1.605s 00:07:05.095 18:57:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.095 18:57:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.095 ************************************ 00:07:05.095 END TEST skip_rpc 00:07:05.095 ************************************ 00:07:05.095 18:57:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:05.095 18:57:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.095 18:57:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.095 18:57:57 -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.095 ************************************ 00:07:05.095 START TEST rpc_client 00:07:05.095 ************************************ 00:07:05.095 18:57:57 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:05.095 * Looking for test storage... 00:07:05.095 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:07:05.095 18:57:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:05.095 OK 00:07:05.095 18:57:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:05.095 00:07:05.095 real 0m0.113s 00:07:05.095 user 0m0.051s 00:07:05.095 sys 0m0.070s 00:07:05.095 18:57:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.095 18:57:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:05.095 ************************************ 00:07:05.095 END TEST rpc_client 00:07:05.095 ************************************ 00:07:05.095 18:57:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:05.095 18:57:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.095 18:57:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.095 18:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:05.354 ************************************ 00:07:05.354 START TEST json_config 00:07:05.354 ************************************ 00:07:05.354 18:57:57 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:05.354 18:57:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.354 18:57:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.354 18:57:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.354 18:57:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.354 18:57:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.354 18:57:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.354 18:57:57 json_config -- paths/export.sh@5 -- # export PATH 00:07:05.354 18:57:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@47 -- # : 0 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.354 18:57:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:05.354 18:57:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:05.355 INFO: JSON configuration test init 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.355 18:57:57 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:05.355 18:57:57 json_config -- json_config/common.sh@9 -- # local app=target 00:07:05.355 18:57:57 json_config -- json_config/common.sh@10 -- # shift 00:07:05.355 18:57:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:05.355 18:57:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:05.355 18:57:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:05.355 18:57:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.355 18:57:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.355 18:57:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=607686 00:07:05.355 18:57:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:05.355 Waiting for target to run... 
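For orientation, the bookkeeping that json_config/common.sh was just traced declaring reduces to three associative arrays keyed by app role, after which json_config_test_start_app launches the target against its own socket (sketch; the array values are copied from the trace):

    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    # start the target paused until RPC configuration arrives:
    spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --wait-for-rpc &
    app_pid[target]=$!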
00:07:05.355 18:57:57 json_config -- json_config/common.sh@25 -- # waitforlisten 607686 /var/tmp/spdk_tgt.sock 00:07:05.355 18:57:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 607686 ']' 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:05.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.355 18:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.355 [2024-07-25 18:57:57.739941] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:05.355 [2024-07-25 18:57:57.739995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607686 ] 00:07:05.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.612 [2024-07-25 18:57:58.016638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.612 [2024-07-25 18:57:58.080442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:06.178 18:57:58 json_config -- json_config/common.sh@26 -- # echo '' 00:07:06.178 00:07:06.178 18:57:58 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:06.178 18:57:58 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.178 18:57:58 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:06.178 18:57:58 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:06.178 18:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.179 18:57:58 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:06.179 18:57:58 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:06.179 18:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:09.469 18:58:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.469 18:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.469 18:58:01 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:09.469 18:58:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:09.469 18:58:01 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@51 -- # sort 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:09.470 18:58:01 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:09.470 18:58:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.470 18:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:09.728 18:58:01 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:09.728 18:58:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.728 18:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.729 18:58:01 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:09.729 18:58:01 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:07:09.729 18:58:01 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:07:09.729 18:58:01 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.729 18:58:01 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:09.729 
18:58:01 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:09.729 18:58:01 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.729 18:58:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@296 -- # e810=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@297 -- # x722=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@298 -- # mlx=() 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:07:14.998 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:07:14.998 
18:58:07 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:07:14.998 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:07:14.998 Found net devices under 0000:af:00.0: mlx_0_0 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:07:14.998 Found net devices under 0000:af:00.1: mlx_0_1 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:14.998 18:58:07 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@58 -- # uname 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 
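After the IB/RDMA kernel modules are loaded, allocate_nic_ips walks the RDMA interface list and resolves each interface's IPv4 address. The lookup the following trace performs (get_ip_address in nvmf/common.sh) is just an ip/awk/cut pipeline:

    get_ip_address() {
        local interface=$1
        # "8: mlx_0_0    inet 192.168.100.8/24 ..." -> 192.168.100.8
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9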
00:07:14.999 18:58:07 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:14.999 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:15.257 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.257 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:07:15.257 altname enp175s0f0np0 00:07:15.257 altname ens801f0np0 00:07:15.257 inet 192.168.100.8/24 scope global mlx_0_0 00:07:15.257 valid_lft forever preferred_lft forever 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:15.257 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:15.257 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:07:15.257 altname enp175s0f1np1 00:07:15.257 altname ens801f1np1 00:07:15.257 inet 192.168.100.9/24 scope global mlx_0_1 00:07:15.257 valid_lft forever preferred_lft forever 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@422 -- # return 0 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.257 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:15.258 18:58:07 json_config -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:15.258 192.168.100.9' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:15.258 192.168.100.9' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@457 -- # head -n 1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:15.258 192.168.100.9' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@458 -- # head -n 1 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:15.258 18:58:07 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:15.258 18:58:07 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:07:15.258 18:58:07 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:15.258 18:58:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:15.516 MallocForNvmf0 00:07:15.516 18:58:07 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:15.517 18:58:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:15.517 MallocForNvmf1 00:07:15.517 18:58:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:07:15.517 18:58:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:07:15.776 [2024-07-25 18:58:08.138447] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:15.776 [2024-07-25 18:58:08.164380] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x121b070/0x1080ec0) succeed. 00:07:15.776 [2024-07-25 18:58:08.175134] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x121b240/0x1100f00) succeed. 
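The head/tail juggling above is how nvmf/common.sh splits the newline-separated list of discovered RDMA IPs into the first and second target addresses:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)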
00:07:15.776 18:58:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:15.776 18:58:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.035 18:58:08 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:16.035 18:58:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:16.294 18:58:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:16.294 18:58:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:16.553 18:58:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:16.553 18:58:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:16.553 [2024-07-25 18:58:08.965107] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:16.553 18:58:08 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:16.553 18:58:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.553 18:58:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 18:58:09 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:16.813 18:58:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.813 18:58:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.813 18:58:09 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:16.813 18:58:09 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:16.813 18:58:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:16.813 MallocBdevForConfigChangeCheck 00:07:16.813 18:58:09 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:16.813 18:58:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.813 18:58:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:17.072 18:58:09 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:17.072 18:58:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:17.331 18:58:09 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:17.331 INFO: shutting down applications... 
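Condensed, the RPC sequence the test just drove to build the NVMe-oF target is the following; every command appears in the trace above (rpc.py path shortened here):

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # sentinel bdev used later by the change-detection pass:
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck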
00:07:17.331 18:58:09 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:17.331 18:58:09 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:17.331 18:58:09 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:17.331 18:58:09 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:18.712 Calling clear_iscsi_subsystem 00:07:18.712 Calling clear_nvmf_subsystem 00:07:18.712 Calling clear_nbd_subsystem 00:07:18.712 Calling clear_ublk_subsystem 00:07:18.712 Calling clear_vhost_blk_subsystem 00:07:18.712 Calling clear_vhost_scsi_subsystem 00:07:18.712 Calling clear_bdev_subsystem 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:18.712 18:58:11 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:19.280 18:58:11 json_config -- json_config/json_config.sh@349 -- # break 00:07:19.280 18:58:11 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:19.280 18:58:11 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:19.280 18:58:11 json_config -- json_config/common.sh@31 -- # local app=target 00:07:19.280 18:58:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:19.280 18:58:11 json_config -- json_config/common.sh@35 -- # [[ -n 607686 ]] 00:07:19.280 18:58:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 607686 00:07:19.280 18:58:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:19.280 18:58:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:19.280 18:58:11 json_config -- json_config/common.sh@41 -- # kill -0 607686 00:07:19.280 18:58:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:19.540 18:58:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:19.540 18:58:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:19.800 18:58:12 json_config -- json_config/common.sh@41 -- # kill -0 607686 00:07:19.800 18:58:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:19.800 18:58:12 json_config -- json_config/common.sh@43 -- # break 00:07:19.800 18:58:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:19.800 18:58:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:19.800 SPDK target shutdown done 00:07:19.800 18:58:12 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:19.800 INFO: relaunching applications... 
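The graceful shutdown that produced the 'SPDK target shutdown done' line above is a SIGINT followed by a bounded poll; in outline (from json_config/common.sh as traced, 30 iterations of 0.5 s):

    kill -SIGINT "${app_pid[target]}"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[target]}" 2>/dev/null || break   # gone -> done
        sleep 0.5
    done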
00:07:19.800 18:58:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:19.800 18:58:12 json_config -- json_config/common.sh@9 -- # local app=target 00:07:19.800 18:58:12 json_config -- json_config/common.sh@10 -- # shift 00:07:19.800 18:58:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:19.800 18:58:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:19.800 18:58:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:19.800 18:58:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.800 18:58:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.800 18:58:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=612404 00:07:19.800 18:58:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:19.800 Waiting for target to run... 00:07:19.800 18:58:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:19.800 18:58:12 json_config -- json_config/common.sh@25 -- # waitforlisten 612404 /var/tmp/spdk_tgt.sock 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@831 -- # '[' -z 612404 ']' 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:19.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.800 18:58:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 [2024-07-25 18:58:12.065473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:19.800 [2024-07-25 18:58:12.065524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612404 ] 00:07:19.800 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.060 [2024-07-25 18:58:12.518494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.319 [2024-07-25 18:58:12.604840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.607 [2024-07-25 18:58:15.643181] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdaed40/0xc14780) succeed. 00:07:23.607 [2024-07-25 18:58:15.654806] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdabda0/0xc947c0) succeed. 
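With the target relaunched from spdk_tgt_config.json, the next step checks that the live configuration matches that file. The /dev/fd/62 seen in the trace is a process substitution feeding json_diff.sh; a sketch with simplified paths:

    # Both inputs are normalized with config_filter.py -method sort, then diffed:
    json_diff.sh <(rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json
    # inside json_diff.sh, roughly:
    #   config_filter.py -method sort < "$1" > /tmp/62.XXX
    #   config_filter.py -method sort < "$2" > /tmp/spdk_tgt_config.json.XXX
    #   diff -u /tmp/62.XXX /tmp/spdk_tgt_config.json.XXX   # exit 0 = identical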
00:07:23.607 [2024-07-25 18:58:15.711088] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:23.866 18:58:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.866 18:58:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:23.866 18:58:16 json_config -- json_config/common.sh@26 -- # echo '' 00:07:23.866 00:07:23.866 18:58:16 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:23.866 18:58:16 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:23.866 INFO: Checking if target configuration is the same... 00:07:23.866 18:58:16 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:23.866 18:58:16 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:23.866 18:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:23.866 + '[' 2 -ne 2 ']' 00:07:23.866 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:23.866 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:23.866 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:23.866 +++ basename /dev/fd/62 00:07:23.866 ++ mktemp /tmp/62.XXX 00:07:23.866 + tmp_file_1=/tmp/62.uPM 00:07:23.866 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:23.866 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:23.866 + tmp_file_2=/tmp/spdk_tgt_config.json.HyG 00:07:23.866 + ret=0 00:07:23.866 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:24.434 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:24.434 + diff -u /tmp/62.uPM /tmp/spdk_tgt_config.json.HyG 00:07:24.434 + echo 'INFO: JSON config files are the same' 00:07:24.434 INFO: JSON config files are the same 00:07:24.434 + rm /tmp/62.uPM /tmp/spdk_tgt_config.json.HyG 00:07:24.434 + exit 0 00:07:24.434 18:58:16 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:24.434 18:58:16 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:24.434 INFO: changing configuration and checking if this can be detected... 
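The change-detection pass that follows deletes the sentinel bdev and expects the same diff to now fail (ret=1), confirming that live edits are reflected in save_config; in outline:

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if json_diff.sh <(rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
        echo "ERROR: configuration change was not detected" >&2
        exit 1
    fi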
00:07:24.434 18:58:16 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.434 18:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.434 18:58:16 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.434 18:58:16 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:24.434 18:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:24.434 + '[' 2 -ne 2 ']' 00:07:24.434 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:24.434 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:24.434 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:24.434 +++ basename /dev/fd/62 00:07:24.434 ++ mktemp /tmp/62.XXX 00:07:24.434 + tmp_file_1=/tmp/62.jiw 00:07:24.434 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.434 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:24.434 + tmp_file_2=/tmp/spdk_tgt_config.json.3ft 00:07:24.434 + ret=0 00:07:24.434 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:25.003 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:25.003 + diff -u /tmp/62.jiw /tmp/spdk_tgt_config.json.3ft 00:07:25.003 + ret=1 00:07:25.003 + echo '=== Start of file: /tmp/62.jiw ===' 00:07:25.003 + cat /tmp/62.jiw 00:07:25.003 + echo '=== End of file: /tmp/62.jiw ===' 00:07:25.003 + echo '' 00:07:25.003 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3ft ===' 00:07:25.003 + cat /tmp/spdk_tgt_config.json.3ft 00:07:25.003 + echo '=== End of file: /tmp/spdk_tgt_config.json.3ft ===' 00:07:25.003 + echo '' 00:07:25.003 + rm /tmp/62.jiw /tmp/spdk_tgt_config.json.3ft 00:07:25.003 + exit 1 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:25.003 INFO: configuration change detected. 
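killprocess, which tears the target down below, verifies the pid still maps to an SPDK reactor before killing and reaping it. A sketch of the common/autotest_common.sh helper as traced (the real helper treats sudo-owned processes differently):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        [[ $name == sudo ]] && return 1              # sketch: skip sudo wrappers
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }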
00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@321 -- # [[ -n 612404 ]] 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 18:58:17 json_config -- json_config/json_config.sh@327 -- # killprocess 612404 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@950 -- # '[' -z 612404 ']' 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@954 -- # kill -0 612404 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@955 -- # uname 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 612404 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 612404' 00:07:25.003 killing process with pid 612404 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@969 -- # kill 612404 00:07:25.003 18:58:17 json_config -- common/autotest_common.sh@974 -- # wait 612404 00:07:26.910 18:58:18 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.910 18:58:18 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:26.910 18:58:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.910 18:58:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.910 18:58:18 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:26.910 18:58:18 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:26.910 INFO: Success 00:07:26.910 18:58:18 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@117 -- # sync 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.910 18:58:18 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:07:26.910 00:07:26.910 real 0m21.353s 00:07:26.910 user 0m24.101s 00:07:26.910 sys 0m6.155s 00:07:26.910 18:58:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.910 18:58:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.910 ************************************ 00:07:26.910 END TEST json_config 00:07:26.910 ************************************ 00:07:26.910 18:58:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:26.910 18:58:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.910 18:58:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.910 18:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:26.910 ************************************ 00:07:26.910 START TEST json_config_extra_key 00:07:26.910 ************************************ 00:07:26.910 18:58:19 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.910 18:58:19 
json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:26.910 18:58:19 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.910 18:58:19 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.910 18:58:19 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.910 18:58:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.910 18:58:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.910 18:58:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.910 18:58:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:26.910 18:58:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.910 18:58:19 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:26.910 18:58:19 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:26.910 INFO: launching applications... 00:07:26.910 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:26.910 18:58:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:26.910 18:58:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:26.910 18:58:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.910 18:58:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.910 18:58:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=613690 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.911 Waiting for target to run... 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 613690 /var/tmp/spdk_tgt.sock 00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 613690 ']' 00:07:26.911 18:58:19 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
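The sequence above is the json_config/common.sh start-up pattern: spdk_tgt is launched in the background with a private RPC socket and the JSON config, and the test blocks until that socket answers. A minimal standalone sketch of the same idea in bash — the paths and flags are taken from this log, while the retry budget and the probe RPC are assumptions:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  # start the target in the background with the extra_key config
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK/test/json_config/extra_key.json" &
  tgt_pid=$!
  # poll the socket until it accepts an RPC; bail out if the target dies first
  for _ in $(seq 1 30); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$tgt_pid" 2>/dev/null || exit 1
      sleep 0.5
  done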
00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.911 18:58:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:26.911 [2024-07-25 18:58:19.148626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:26.911 [2024-07-25 18:58:19.148672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613690 ] 00:07:26.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.170 [2024-07-25 18:58:19.427814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.170 [2024-07-25 18:58:19.491452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.738 18:58:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.738 18:58:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:27.738 00:07:27.738 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:27.738 INFO: shutting down applications... 00:07:27.738 18:58:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 613690 ]] 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 613690 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 613690 00:07:27.738 18:58:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 613690 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:28.306 18:58:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:28.306 SPDK target shutdown done 00:07:28.306 18:58:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:28.306 Success 00:07:28.306 00:07:28.306 real 0m1.484s 00:07:28.306 user 0m1.283s 00:07:28.306 sys 0m0.384s 00:07:28.306 18:58:20 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.306 18:58:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:28.306 ************************************ 00:07:28.306 END TEST json_config_extra_key 00:07:28.306 ************************************ 00:07:28.306 18:58:20 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.306 18:58:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.306 18:58:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.306 18:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.306 ************************************ 00:07:28.306 START TEST alias_rpc 00:07:28.306 ************************************ 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.307 * Looking for test storage... 00:07:28.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:28.307 18:58:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.307 18:58:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=613971 00:07:28.307 18:58:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:28.307 18:58:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 613971 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 613971 ']' 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.307 18:58:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.307 [2024-07-25 18:58:20.695298] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:28.307 [2024-07-25 18:58:20.695346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613971 ] 00:07:28.307 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.307 [2024-07-25 18:58:20.762734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.566 [2024-07-25 18:58:20.836905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.134 18:58:21 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.134 18:58:21 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:29.134 18:58:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:29.393 18:58:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 613971 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 613971 ']' 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 613971 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 613971 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 613971' 00:07:29.393 killing process with pid 613971 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@969 -- # kill 613971 00:07:29.393 18:58:21 alias_rpc -- common/autotest_common.sh@974 -- # wait 613971 00:07:29.651 00:07:29.651 real 0m1.554s 00:07:29.651 user 0m1.746s 00:07:29.651 sys 0m0.419s 00:07:29.651 18:58:22 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.651 18:58:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.651 ************************************ 00:07:29.651 END TEST alias_rpc 00:07:29.651 ************************************ 00:07:29.910 18:58:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:29.910 18:58:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:29.910 18:58:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.910 18:58:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.910 18:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 ************************************ 00:07:29.910 START TEST spdkcli_tcp 00:07:29.910 ************************************ 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:29.910 * Looking for test storage... 
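The teardown traced just above (kill -0, a ps comm check, kill, wait) is the autotest killprocess idiom that recurs at the end of every suite in this log. A simplified sketch — the real helper also handles the sudo-wrapped case, which is only refused here:

  # verify the pid is alive, refuse to TERM a sudo wrapper directly,
  # then kill it and reap its exit status (works because the target is
  # a child of the test shell)
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }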
00:07:29.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=614263 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 614263 00:07:29.910 18:58:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 614263 ']' 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.910 18:58:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 [2024-07-25 18:58:22.324736] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:29.911 [2024-07-25 18:58:22.324781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614263 ] 00:07:29.911 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.169 [2024-07-25 18:58:22.393261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.169 [2024-07-25 18:58:22.471321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.169 [2024-07-25 18:58:22.471323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.738 18:58:23 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.738 18:58:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:30.738 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=614497 00:07:30.738 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:30.738 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:30.997 [ 00:07:30.997 "bdev_malloc_delete", 00:07:30.997 "bdev_malloc_create", 00:07:30.997 "bdev_null_resize", 00:07:30.997 "bdev_null_delete", 00:07:30.997 "bdev_null_create", 00:07:30.997 "bdev_nvme_cuse_unregister", 00:07:30.997 "bdev_nvme_cuse_register", 00:07:30.997 "bdev_opal_new_user", 00:07:30.997 "bdev_opal_set_lock_state", 00:07:30.997 "bdev_opal_delete", 00:07:30.997 "bdev_opal_get_info", 00:07:30.997 "bdev_opal_create", 00:07:30.997 "bdev_nvme_opal_revert", 00:07:30.997 "bdev_nvme_opal_init", 00:07:30.997 "bdev_nvme_send_cmd", 00:07:30.997 "bdev_nvme_get_path_iostat", 00:07:30.997 "bdev_nvme_get_mdns_discovery_info", 00:07:30.997 "bdev_nvme_stop_mdns_discovery", 00:07:30.997 "bdev_nvme_start_mdns_discovery", 00:07:30.997 "bdev_nvme_set_multipath_policy", 00:07:30.997 "bdev_nvme_set_preferred_path", 00:07:30.997 "bdev_nvme_get_io_paths", 00:07:30.997 "bdev_nvme_remove_error_injection", 00:07:30.997 "bdev_nvme_add_error_injection", 00:07:30.997 "bdev_nvme_get_discovery_info", 00:07:30.997 "bdev_nvme_stop_discovery", 00:07:30.997 "bdev_nvme_start_discovery", 00:07:30.997 "bdev_nvme_get_controller_health_info", 00:07:30.997 "bdev_nvme_disable_controller", 00:07:30.997 "bdev_nvme_enable_controller", 00:07:30.997 "bdev_nvme_reset_controller", 00:07:30.997 "bdev_nvme_get_transport_statistics", 00:07:30.997 "bdev_nvme_apply_firmware", 00:07:30.997 "bdev_nvme_detach_controller", 00:07:30.997 "bdev_nvme_get_controllers", 00:07:30.997 "bdev_nvme_attach_controller", 00:07:30.997 "bdev_nvme_set_hotplug", 00:07:30.997 "bdev_nvme_set_options", 00:07:30.997 "bdev_passthru_delete", 00:07:30.997 "bdev_passthru_create", 00:07:30.997 "bdev_lvol_set_parent_bdev", 00:07:30.997 "bdev_lvol_set_parent", 00:07:30.997 "bdev_lvol_check_shallow_copy", 00:07:30.997 "bdev_lvol_start_shallow_copy", 00:07:30.997 "bdev_lvol_grow_lvstore", 00:07:30.998 "bdev_lvol_get_lvols", 00:07:30.998 "bdev_lvol_get_lvstores", 00:07:30.998 "bdev_lvol_delete", 00:07:30.998 "bdev_lvol_set_read_only", 00:07:30.998 "bdev_lvol_resize", 00:07:30.998 "bdev_lvol_decouple_parent", 00:07:30.998 "bdev_lvol_inflate", 00:07:30.998 "bdev_lvol_rename", 00:07:30.998 "bdev_lvol_clone_bdev", 00:07:30.998 "bdev_lvol_clone", 00:07:30.998 "bdev_lvol_snapshot", 00:07:30.998 "bdev_lvol_create", 00:07:30.998 "bdev_lvol_delete_lvstore", 00:07:30.998 
"bdev_lvol_rename_lvstore", 00:07:30.998 "bdev_lvol_create_lvstore", 00:07:30.998 "bdev_raid_set_options", 00:07:30.998 "bdev_raid_remove_base_bdev", 00:07:30.998 "bdev_raid_add_base_bdev", 00:07:30.998 "bdev_raid_delete", 00:07:30.998 "bdev_raid_create", 00:07:30.998 "bdev_raid_get_bdevs", 00:07:30.998 "bdev_error_inject_error", 00:07:30.998 "bdev_error_delete", 00:07:30.998 "bdev_error_create", 00:07:30.998 "bdev_split_delete", 00:07:30.998 "bdev_split_create", 00:07:30.998 "bdev_delay_delete", 00:07:30.998 "bdev_delay_create", 00:07:30.998 "bdev_delay_update_latency", 00:07:30.998 "bdev_zone_block_delete", 00:07:30.998 "bdev_zone_block_create", 00:07:30.998 "blobfs_create", 00:07:30.998 "blobfs_detect", 00:07:30.998 "blobfs_set_cache_size", 00:07:30.998 "bdev_aio_delete", 00:07:30.998 "bdev_aio_rescan", 00:07:30.998 "bdev_aio_create", 00:07:30.998 "bdev_ftl_set_property", 00:07:30.998 "bdev_ftl_get_properties", 00:07:30.998 "bdev_ftl_get_stats", 00:07:30.998 "bdev_ftl_unmap", 00:07:30.998 "bdev_ftl_unload", 00:07:30.998 "bdev_ftl_delete", 00:07:30.998 "bdev_ftl_load", 00:07:30.998 "bdev_ftl_create", 00:07:30.998 "bdev_virtio_attach_controller", 00:07:30.998 "bdev_virtio_scsi_get_devices", 00:07:30.998 "bdev_virtio_detach_controller", 00:07:30.998 "bdev_virtio_blk_set_hotplug", 00:07:30.998 "bdev_iscsi_delete", 00:07:30.998 "bdev_iscsi_create", 00:07:30.998 "bdev_iscsi_set_options", 00:07:30.998 "accel_error_inject_error", 00:07:30.998 "ioat_scan_accel_module", 00:07:30.998 "dsa_scan_accel_module", 00:07:30.998 "iaa_scan_accel_module", 00:07:30.998 "keyring_file_remove_key", 00:07:30.998 "keyring_file_add_key", 00:07:30.998 "keyring_linux_set_options", 00:07:30.998 "iscsi_get_histogram", 00:07:30.998 "iscsi_enable_histogram", 00:07:30.998 "iscsi_set_options", 00:07:30.998 "iscsi_get_auth_groups", 00:07:30.998 "iscsi_auth_group_remove_secret", 00:07:30.998 "iscsi_auth_group_add_secret", 00:07:30.998 "iscsi_delete_auth_group", 00:07:30.998 "iscsi_create_auth_group", 00:07:30.998 "iscsi_set_discovery_auth", 00:07:30.998 "iscsi_get_options", 00:07:30.998 "iscsi_target_node_request_logout", 00:07:30.998 "iscsi_target_node_set_redirect", 00:07:30.998 "iscsi_target_node_set_auth", 00:07:30.998 "iscsi_target_node_add_lun", 00:07:30.998 "iscsi_get_stats", 00:07:30.998 "iscsi_get_connections", 00:07:30.998 "iscsi_portal_group_set_auth", 00:07:30.998 "iscsi_start_portal_group", 00:07:30.998 "iscsi_delete_portal_group", 00:07:30.998 "iscsi_create_portal_group", 00:07:30.998 "iscsi_get_portal_groups", 00:07:30.998 "iscsi_delete_target_node", 00:07:30.998 "iscsi_target_node_remove_pg_ig_maps", 00:07:30.998 "iscsi_target_node_add_pg_ig_maps", 00:07:30.998 "iscsi_create_target_node", 00:07:30.998 "iscsi_get_target_nodes", 00:07:30.998 "iscsi_delete_initiator_group", 00:07:30.998 "iscsi_initiator_group_remove_initiators", 00:07:30.998 "iscsi_initiator_group_add_initiators", 00:07:30.998 "iscsi_create_initiator_group", 00:07:30.998 "iscsi_get_initiator_groups", 00:07:30.998 "nvmf_set_crdt", 00:07:30.998 "nvmf_set_config", 00:07:30.998 "nvmf_set_max_subsystems", 00:07:30.998 "nvmf_stop_mdns_prr", 00:07:30.998 "nvmf_publish_mdns_prr", 00:07:30.998 "nvmf_subsystem_get_listeners", 00:07:30.998 "nvmf_subsystem_get_qpairs", 00:07:30.998 "nvmf_subsystem_get_controllers", 00:07:30.998 "nvmf_get_stats", 00:07:30.998 "nvmf_get_transports", 00:07:30.998 "nvmf_create_transport", 00:07:30.998 "nvmf_get_targets", 00:07:30.998 "nvmf_delete_target", 00:07:30.998 "nvmf_create_target", 00:07:30.998 
"nvmf_subsystem_allow_any_host", 00:07:30.998 "nvmf_subsystem_remove_host", 00:07:30.998 "nvmf_subsystem_add_host", 00:07:30.998 "nvmf_ns_remove_host", 00:07:30.998 "nvmf_ns_add_host", 00:07:30.998 "nvmf_subsystem_remove_ns", 00:07:30.998 "nvmf_subsystem_add_ns", 00:07:30.998 "nvmf_subsystem_listener_set_ana_state", 00:07:30.998 "nvmf_discovery_get_referrals", 00:07:30.998 "nvmf_discovery_remove_referral", 00:07:30.998 "nvmf_discovery_add_referral", 00:07:30.998 "nvmf_subsystem_remove_listener", 00:07:30.998 "nvmf_subsystem_add_listener", 00:07:30.998 "nvmf_delete_subsystem", 00:07:30.998 "nvmf_create_subsystem", 00:07:30.998 "nvmf_get_subsystems", 00:07:30.998 "env_dpdk_get_mem_stats", 00:07:30.998 "nbd_get_disks", 00:07:30.998 "nbd_stop_disk", 00:07:30.998 "nbd_start_disk", 00:07:30.998 "ublk_recover_disk", 00:07:30.998 "ublk_get_disks", 00:07:30.998 "ublk_stop_disk", 00:07:30.998 "ublk_start_disk", 00:07:30.998 "ublk_destroy_target", 00:07:30.998 "ublk_create_target", 00:07:30.998 "virtio_blk_create_transport", 00:07:30.998 "virtio_blk_get_transports", 00:07:30.998 "vhost_controller_set_coalescing", 00:07:30.998 "vhost_get_controllers", 00:07:30.998 "vhost_delete_controller", 00:07:30.998 "vhost_create_blk_controller", 00:07:30.998 "vhost_scsi_controller_remove_target", 00:07:30.998 "vhost_scsi_controller_add_target", 00:07:30.998 "vhost_start_scsi_controller", 00:07:30.998 "vhost_create_scsi_controller", 00:07:30.998 "thread_set_cpumask", 00:07:30.998 "framework_get_governor", 00:07:30.998 "framework_get_scheduler", 00:07:30.998 "framework_set_scheduler", 00:07:30.998 "framework_get_reactors", 00:07:30.998 "thread_get_io_channels", 00:07:30.998 "thread_get_pollers", 00:07:30.998 "thread_get_stats", 00:07:30.998 "framework_monitor_context_switch", 00:07:30.998 "spdk_kill_instance", 00:07:30.998 "log_enable_timestamps", 00:07:30.998 "log_get_flags", 00:07:30.998 "log_clear_flag", 00:07:30.998 "log_set_flag", 00:07:30.998 "log_get_level", 00:07:30.998 "log_set_level", 00:07:30.998 "log_get_print_level", 00:07:30.998 "log_set_print_level", 00:07:30.998 "framework_enable_cpumask_locks", 00:07:30.998 "framework_disable_cpumask_locks", 00:07:30.998 "framework_wait_init", 00:07:30.998 "framework_start_init", 00:07:30.998 "scsi_get_devices", 00:07:30.998 "bdev_get_histogram", 00:07:30.998 "bdev_enable_histogram", 00:07:30.998 "bdev_set_qos_limit", 00:07:30.998 "bdev_set_qd_sampling_period", 00:07:30.998 "bdev_get_bdevs", 00:07:30.998 "bdev_reset_iostat", 00:07:30.998 "bdev_get_iostat", 00:07:30.998 "bdev_examine", 00:07:30.998 "bdev_wait_for_examine", 00:07:30.998 "bdev_set_options", 00:07:30.998 "notify_get_notifications", 00:07:30.998 "notify_get_types", 00:07:30.998 "accel_get_stats", 00:07:30.998 "accel_set_options", 00:07:30.998 "accel_set_driver", 00:07:30.998 "accel_crypto_key_destroy", 00:07:30.998 "accel_crypto_keys_get", 00:07:30.998 "accel_crypto_key_create", 00:07:30.998 "accel_assign_opc", 00:07:30.998 "accel_get_module_info", 00:07:30.998 "accel_get_opc_assignments", 00:07:30.998 "vmd_rescan", 00:07:30.998 "vmd_remove_device", 00:07:30.998 "vmd_enable", 00:07:30.998 "sock_get_default_impl", 00:07:30.998 "sock_set_default_impl", 00:07:30.998 "sock_impl_set_options", 00:07:30.998 "sock_impl_get_options", 00:07:30.998 "iobuf_get_stats", 00:07:30.998 "iobuf_set_options", 00:07:30.998 "framework_get_pci_devices", 00:07:30.998 "framework_get_config", 00:07:30.998 "framework_get_subsystems", 00:07:30.998 "trace_get_info", 00:07:30.998 "trace_get_tpoint_group_mask", 00:07:30.998 
"trace_disable_tpoint_group", 00:07:30.998 "trace_enable_tpoint_group", 00:07:30.998 "trace_clear_tpoint_mask", 00:07:30.998 "trace_set_tpoint_mask", 00:07:30.998 "keyring_get_keys", 00:07:30.998 "spdk_get_version", 00:07:30.998 "rpc_get_methods" 00:07:30.998 ] 00:07:30.998 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.998 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:30.998 18:58:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 614263 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 614263 ']' 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 614263 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 614263 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 614263' 00:07:30.998 killing process with pid 614263 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 614263 00:07:30.998 18:58:23 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 614263 00:07:31.568 00:07:31.568 real 0m1.567s 00:07:31.568 user 0m2.967s 00:07:31.568 sys 0m0.437s 00:07:31.568 18:58:23 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.568 18:58:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 ************************************ 00:07:31.568 END TEST spdkcli_tcp 00:07:31.568 ************************************ 00:07:31.568 18:58:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.568 18:58:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.568 18:58:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.568 18:58:23 -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 ************************************ 00:07:31.568 START TEST dpdk_mem_utility 00:07:31.568 ************************************ 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.568 * Looking for test storage... 
00:07:31.568 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:31.568 18:58:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:31.568 18:58:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=614570 00:07:31.568 18:58:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 614570 00:07:31.568 18:58:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 614570 ']' 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.568 18:58:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:31.568 [2024-07-25 18:58:23.947376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:31.568 [2024-07-25 18:58:23.947432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614570 ] 00:07:31.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.568 [2024-07-25 18:58:24.002835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.827 [2024-07-25 18:58:24.080604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.396 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.396 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:32.396 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:32.396 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:32.396 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.396 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:32.396 { 00:07:32.396 "filename": "/tmp/spdk_mem_dump.txt" 00:07:32.396 } 00:07:32.396 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.396 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:32.396 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:32.396 1 heaps totaling size 814.000000 MiB 00:07:32.396 size: 814.000000 MiB heap id: 0 00:07:32.396 end heaps---------- 00:07:32.396 8 mempools totaling size 598.116089 MiB 00:07:32.396 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:32.396 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:32.396 size: 84.521057 MiB name: bdev_io_614570 00:07:32.396 size: 51.011292 MiB name: evtpool_614570 00:07:32.396 size: 50.003479 MiB name: 
msgpool_614570 00:07:32.396 size: 21.763794 MiB name: PDU_Pool 00:07:32.396 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:32.396 size: 0.026123 MiB name: Session_Pool 00:07:32.396 end mempools------- 00:07:32.396 6 memzones totaling size 4.142822 MiB 00:07:32.396 size: 1.000366 MiB name: RG_ring_0_614570 00:07:32.396 size: 1.000366 MiB name: RG_ring_1_614570 00:07:32.396 size: 1.000366 MiB name: RG_ring_4_614570 00:07:32.396 size: 1.000366 MiB name: RG_ring_5_614570 00:07:32.396 size: 0.125366 MiB name: RG_ring_2_614570 00:07:32.396 size: 0.015991 MiB name: RG_ring_3_614570 00:07:32.396 end memzones------- 00:07:32.396 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:32.656 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:32.656 list of free elements. size: 12.519348 MiB 00:07:32.656 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:32.656 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:32.656 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:32.656 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:32.656 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:32.656 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:32.656 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:32.656 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:32.656 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:32.656 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:32.656 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:32.656 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:32.656 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:32.656 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:32.656 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:32.656 list of standard malloc elements. 
size: 199.218079 MiB 00:07:32.656 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:32.656 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:32.656 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:32.656 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:32.656 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:32.656 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:32.656 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:32.656 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:32.656 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:32.656 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:32.656 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:32.656 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:32.656 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:32.656 list of memzone associated elements. 
size: 602.262573 MiB 00:07:32.656 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:32.656 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:32.656 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:32.656 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:32.656 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:32.656 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_614570_0 00:07:32.656 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:32.656 associated memzone info: size: 48.002930 MiB name: MP_evtpool_614570_0 00:07:32.656 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:32.656 associated memzone info: size: 48.002930 MiB name: MP_msgpool_614570_0 00:07:32.656 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:32.656 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:32.656 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:32.656 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:32.656 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:32.656 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_614570 00:07:32.656 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:32.656 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_614570 00:07:32.656 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:32.656 associated memzone info: size: 1.007996 MiB name: MP_evtpool_614570 00:07:32.656 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:32.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:32.656 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:32.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:32.656 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:32.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:32.656 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:32.656 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:32.656 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:32.656 associated memzone info: size: 1.000366 MiB name: RG_ring_0_614570 00:07:32.656 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:32.656 associated memzone info: size: 1.000366 MiB name: RG_ring_1_614570 00:07:32.657 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:32.657 associated memzone info: size: 1.000366 MiB name: RG_ring_4_614570 00:07:32.657 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:32.657 associated memzone info: size: 1.000366 MiB name: RG_ring_5_614570 00:07:32.657 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:32.657 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_614570 00:07:32.657 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:32.657 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:32.657 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:32.657 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:32.657 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:32.657 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:32.657 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:32.657 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_614570 00:07:32.657 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:32.657 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:32.657 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:32.657 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:32.657 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:32.657 associated memzone info: size: 0.015991 MiB name: RG_ring_3_614570 00:07:32.657 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:32.657 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:32.657 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:32.657 associated memzone info: size: 0.000183 MiB name: MP_msgpool_614570 00:07:32.657 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:32.657 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_614570 00:07:32.657 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:32.657 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:32.657 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:32.657 18:58:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 614570 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 614570 ']' 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 614570 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 614570 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 614570' 00:07:32.657 killing process with pid 614570 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 614570 00:07:32.657 18:58:24 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 614570 00:07:32.916 00:07:32.916 real 0m1.425s 00:07:32.916 user 0m1.543s 00:07:32.916 sys 0m0.384s 00:07:32.916 18:58:25 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.916 18:58:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:32.916 ************************************ 00:07:32.916 END TEST dpdk_mem_utility 00:07:32.916 ************************************ 00:07:32.916 18:58:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:32.916 18:58:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.916 18:58:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.916 18:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:32.916 ************************************ 00:07:32.916 START TEST event 00:07:32.916 ************************************ 00:07:32.916 18:58:25 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:33.175 * Looking for test storage... 
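The dump above comes from a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename in the RPC reply above), and dpdk_mem_info.py renders it, with -m selecting one heap for the element-level listing. In isolation:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # ask the running target to dump its DPDK memory state
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
  # summary view: heaps, mempools, memzones
  "$SPDK/scripts/dpdk_mem_info.py"
  # per-element breakdown of heap 0
  "$SPDK/scripts/dpdk_mem_info.py" -m 0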
00:07:33.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:33.175 18:58:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:33.175 18:58:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:33.175 18:58:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.175 18:58:25 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:33.175 18:58:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.175 18:58:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.175 ************************************ 00:07:33.175 START TEST event_perf 00:07:33.175 ************************************ 00:07:33.175 18:58:25 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.175 Running I/O for 1 seconds...[2024-07-25 18:58:25.450514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:33.175 [2024-07-25 18:58:25.450581] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614924 ] 00:07:33.175 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.175 [2024-07-25 18:58:25.523720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.175 [2024-07-25 18:58:25.597101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.175 [2024-07-25 18:58:25.597210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.175 [2024-07-25 18:58:25.597316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.175 Running I/O for 1 seconds...[2024-07-25 18:58:25.597317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.554 00:07:34.554 lcore 0: 206280 00:07:34.554 lcore 1: 206277 00:07:34.554 lcore 2: 206278 00:07:34.554 lcore 3: 206279 00:07:34.554 done. 00:07:34.554 00:07:34.554 real 0m1.236s 00:07:34.554 user 0m4.143s 00:07:34.554 sys 0m0.090s 00:07:34.554 18:58:26 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.554 18:58:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.554 ************************************ 00:07:34.554 END TEST event_perf 00:07:34.554 ************************************ 00:07:34.554 18:58:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:34.554 18:58:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.554 18:58:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.554 18:58:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.554 ************************************ 00:07:34.554 START TEST event_reactor 00:07:34.554 ************************************ 00:07:34.554 18:58:26 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:34.554 [2024-07-25 18:58:26.753544] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
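For reference, the event_perf run above amounts to the single invocation below: mask 0xF gives four reactors, -t 1 runs for one second, and the per-lcore lines (roughly 206k each) are the events each reactor completed in that window.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # four reactors, one second of event round-trips
  "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1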
00:07:34.554 [2024-07-25 18:58:26.753614] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615131 ] 00:07:34.554 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.554 [2024-07-25 18:58:26.826362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.554 [2024-07-25 18:58:26.897386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.493 test_start 00:07:35.493 oneshot 00:07:35.493 tick 100 00:07:35.493 tick 100 00:07:35.493 tick 250 00:07:35.493 tick 100 00:07:35.493 tick 100 00:07:35.493 tick 250 00:07:35.493 tick 100 00:07:35.493 tick 500 00:07:35.493 tick 100 00:07:35.493 tick 100 00:07:35.493 tick 250 00:07:35.493 tick 100 00:07:35.493 tick 100 00:07:35.493 test_end 00:07:35.493 00:07:35.493 real 0m1.231s 00:07:35.493 user 0m1.141s 00:07:35.493 sys 0m0.086s 00:07:35.753 18:58:27 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.753 18:58:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:35.753 ************************************ 00:07:35.753 END TEST event_reactor 00:07:35.753 ************************************ 00:07:35.753 18:58:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.753 18:58:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.753 18:58:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.753 18:58:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.753 ************************************ 00:07:35.753 START TEST event_reactor_perf 00:07:35.753 ************************************ 00:07:35.753 18:58:28 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.753 [2024-07-25 18:58:28.050265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
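The event_reactor output above (oneshot, then the tick 100/250/500 lines) reads as timed pollers firing at their configured periods on a single reactor; the run itself is just:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # single reactor, one second; prints a line per poller invocation
  "$SPDK/test/event/reactor/reactor" -t 1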
00:07:35.753 [2024-07-25 18:58:28.050336] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615375 ] 00:07:35.753 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.753 [2024-07-25 18:58:28.120359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.753 [2024-07-25 18:58:28.193254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.139 test_start 00:07:37.139 test_end 00:07:37.139 Performance: 508436 events per second 00:07:37.139 00:07:37.139 real 0m1.229s 00:07:37.139 user 0m1.139s 00:07:37.139 sys 0m0.085s 00:07:37.139 18:58:29 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.139 18:58:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.139 ************************************ 00:07:37.139 END TEST event_reactor_perf 00:07:37.139 ************************************ 00:07:37.139 18:58:29 event -- event/event.sh@49 -- # uname -s 00:07:37.139 18:58:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:37.139 18:58:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:37.139 18:58:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.139 18:58:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.139 18:58:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.139 ************************************ 00:07:37.139 START TEST event_scheduler 00:07:37.139 ************************************ 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:37.139 * Looking for test storage... 00:07:37.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:37.139 18:58:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:37.139 18:58:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=615653 00:07:37.139 18:58:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:37.139 18:58:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.139 18:58:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 615653 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 615653 ']' 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
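Likewise the reactor_perf figure above (508436 events per second) comes from the one-second throughput variant of the same harness:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # one reactor, events requeued back-to-back for one second
  "$SPDK/test/event/reactor_perf/reactor_perf" -t 1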
00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.139 18:58:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.139 [2024-07-25 18:58:29.463573] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:37.139 [2024-07-25 18:58:29.463621] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615653 ] 00:07:37.139 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.139 [2024-07-25 18:58:29.535874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.397 [2024-07-25 18:58:29.610883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.397 [2024-07-25 18:58:29.611022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.397 [2024-07-25 18:58:29.611034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.397 [2024-07-25 18:58:29.611040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:37.964 18:58:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.964 [2024-07-25 18:58:30.321589] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:37.964 [2024-07-25 18:58:30.321614] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:37.964 [2024-07-25 18:58:30.321625] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:37.964 [2024-07-25 18:58:30.321630] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:37.964 [2024-07-25 18:58:30.321635] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.964 18:58:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.964 [2024-07-25 18:58:30.393449] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
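The scheduler bring-up traced above hinges on --wait-for-rpc: the app parks before subsystem init, framework_set_scheduler flips it to dynamic (note the harmless dpdk_governor error on this box), and framework_start_init releases it. Condensed, with the waitforlisten step reduced to a comment:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # main lcore 2 (-p 0x2), four cores, hold init until told otherwise
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  # ... wait until /var/tmp/spdk.sock answers, then configure and release init
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK/scripts/rpc.py" framework_start_init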
00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.964 18:58:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.964 18:58:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.964 ************************************ 00:07:37.964 START TEST scheduler_create_thread 00:07:37.964 ************************************ 00:07:37.964 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:37.964 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:37.964 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.964 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 2 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 3 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 4 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 5 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 6 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:38.223 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 7 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 8 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 9 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 10 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.792 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.792 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:38.792 18:58:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:38.792 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.792 18:58:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.730 18:58:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.730 18:58:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:39.730 18:58:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.730 18:58:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.669 18:58:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.669 18:58:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:40.669 18:58:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:40.669 18:58:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.669 18:58:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.237 18:58:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.237 00:07:41.237 real 0m3.231s 00:07:41.237 user 0m0.026s 00:07:41.237 sys 0m0.004s 00:07:41.237 18:58:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.237 18:58:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.237 ************************************ 00:07:41.237 END TEST scheduler_create_thread 00:07:41.237 ************************************ 00:07:41.237 18:58:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:41.237 18:58:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 615653 00:07:41.237 18:58:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 615653 ']' 00:07:41.237 18:58:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 615653 00:07:41.237 18:58:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:41.237 18:58:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.237 18:58:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 615653 00:07:41.496 18:58:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:41.496 18:58:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:41.496 18:58:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 615653' 00:07:41.496 killing process with pid 615653 00:07:41.496 18:58:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 615653 00:07:41.496 18:58:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 615653 00:07:41.755 [2024-07-25 18:58:34.042517] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
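Aside, not from the captured run: the killprocess teardown traced just above first checks that the PID is still alive and is not something it must never signal, then kills and reaps it. A rough Bash equivalent of the visible steps, with the helper's exact error handling assumed:

    pid=615653
    kill -0 "$pid"                              # liveness probe; fails if already gone
    name=$(ps --no-headers -o comm= "$pid")     # refuse to signal e.g. a sudo process
    [ "$name" != sudo ] && kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it was our child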
00:07:42.015 00:07:42.015 real 0m4.964s 00:07:42.015 user 0m10.263s 00:07:42.015 sys 0m0.387s 00:07:42.015 18:58:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.015 18:58:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 ************************************ 00:07:42.015 END TEST event_scheduler 00:07:42.015 ************************************ 00:07:42.015 18:58:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:42.015 18:58:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:42.015 18:58:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.015 18:58:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.015 18:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 ************************************ 00:07:42.015 START TEST app_repeat 00:07:42.015 ************************************ 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=616629 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 616629' 00:07:42.015 Process app_repeat pid: 616629 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:42.015 spdk_app_start Round 0 00:07:42.015 18:58:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 616629 /var/tmp/spdk-nbd.sock 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 616629 ']' 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.015 18:58:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.015 [2024-07-25 18:58:34.410826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:42.015 [2024-07-25 18:58:34.410878] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616629 ] 00:07:42.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.015 [2024-07-25 18:58:34.479757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.274 [2024-07-25 18:58:34.551505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.274 [2024-07-25 18:58:34.551505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.274 18:58:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.274 18:58:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:42.274 18:58:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.533 Malloc0 00:07:42.533 18:58:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.792 Malloc1 00:07:42.792 18:58:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.792 18:58:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:42.792 /dev/nbd0 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.052 18:58:35 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.052 1+0 records in 00:07:43.052 1+0 records out 00:07:43.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202296 s, 20.2 MB/s 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:43.052 /dev/nbd1 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.052 18:58:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.052 18:58:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.312 1+0 records in 00:07:43.312 1+0 records out 00:07:43.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292941 s, 14.0 MB/s 00:07:43.312 18:58:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.312 18:58:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:43.312 18:58:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:43.312 18:58:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.312 18:58:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:43.312 { 00:07:43.312 "nbd_device": "/dev/nbd0", 00:07:43.312 "bdev_name": "Malloc0" 00:07:43.312 }, 00:07:43.312 { 00:07:43.312 "nbd_device": "/dev/nbd1", 00:07:43.312 "bdev_name": "Malloc1" 00:07:43.312 } 00:07:43.312 ]' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:43.312 { 00:07:43.312 "nbd_device": "/dev/nbd0", 00:07:43.312 "bdev_name": "Malloc0" 00:07:43.312 }, 00:07:43.312 { 00:07:43.312 "nbd_device": "/dev/nbd1", 00:07:43.312 "bdev_name": "Malloc1" 00:07:43.312 } 00:07:43.312 ]' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:43.312 /dev/nbd1' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:43.312 /dev/nbd1' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:43.312 18:58:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:43.573 256+0 records in 00:07:43.573 256+0 records out 00:07:43.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994563 s, 105 MB/s 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.573 256+0 records in 00:07:43.573 256+0 records out 00:07:43.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014469 s, 72.5 MB/s 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.573 256+0 records in 00:07:43.573 256+0 records out 00:07:43.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148876 s, 70.4 MB/s 00:07:43.573 18:58:35 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.573 18:58:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.833 
18:58:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.833 18:58:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.092 18:58:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.092 18:58:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:44.352 18:58:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:44.610 [2024-07-25 18:58:36.935476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:44.610 [2024-07-25 18:58:37.001201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.610 [2024-07-25 18:58:37.001214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.610 [2024-07-25 18:58:37.042156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:44.610 [2024-07-25 18:58:37.042194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:47.900 18:58:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:47.901 18:58:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:47.901 spdk_app_start Round 1 00:07:47.901 18:58:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 616629 /var/tmp/spdk-nbd.sock 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 616629 ']' 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:47.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
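Aside, not from the captured run: the nbd_get_count sequence above derives the device count by listing exports over JSON-RPC and counting /dev/nbd entries; once both disks are stopped the list is empty and the count is 0. A condensed sketch of that query, socket path as in this test:

    sock=/var/tmp/spdk-nbd.sock
    names=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(printf '%s' "$names" | grep -c /dev/nbd || true)   # grep exits 1 on zero matches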
00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.901 18:58:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:47.901 18:58:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.901 Malloc0 00:07:47.901 18:58:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:47.901 Malloc1 00:07:48.160 18:58:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:48.160 /dev/nbd0 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:48.160 1+0 records in 00:07:48.160 1+0 records out 00:07:48.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018758 s, 21.8 MB/s 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.160 18:58:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.160 18:58:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:48.419 /dev/nbd1 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.419 1+0 records in 00:07:48.419 1+0 records out 00:07:48.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238144 s, 17.2 MB/s 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.419 18:58:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.419 18:58:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.686 { 00:07:48.686 
"nbd_device": "/dev/nbd0", 00:07:48.686 "bdev_name": "Malloc0" 00:07:48.686 }, 00:07:48.686 { 00:07:48.686 "nbd_device": "/dev/nbd1", 00:07:48.686 "bdev_name": "Malloc1" 00:07:48.686 } 00:07:48.686 ]' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.686 { 00:07:48.686 "nbd_device": "/dev/nbd0", 00:07:48.686 "bdev_name": "Malloc0" 00:07:48.686 }, 00:07:48.686 { 00:07:48.686 "nbd_device": "/dev/nbd1", 00:07:48.686 "bdev_name": "Malloc1" 00:07:48.686 } 00:07:48.686 ]' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.686 /dev/nbd1' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.686 /dev/nbd1' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:48.686 256+0 records in 00:07:48.686 256+0 records out 00:07:48.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444114 s, 236 MB/s 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.686 18:58:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:48.686 256+0 records in 00:07:48.686 256+0 records out 00:07:48.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143889 s, 72.9 MB/s 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.687 256+0 records in 00:07:48.687 256+0 records out 00:07:48.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150843 s, 69.5 MB/s 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.687 18:58:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.947 18:58:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.206 18:58:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.465 18:58:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.466 18:58:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.466 18:58:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.466 18:58:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.466 18:58:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:49.774 18:58:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:49.774 [2024-07-25 18:58:42.226945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.034 [2024-07-25 18:58:42.293572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.034 [2024-07-25 18:58:42.293573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.034 [2024-07-25 18:58:42.335355] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:50.034 [2024-07-25 18:58:42.335396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:53.327 18:58:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:53.327 18:58:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:53.327 spdk_app_start Round 2 00:07:53.327 18:58:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 616629 /var/tmp/spdk-nbd.sock 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 616629 ']' 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
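Aside, not from the captured run: the write/verify pass traced above pushes 1 MiB of random data through each nbd export with O_DIRECT and compares it back byte for byte. The same pattern in isolation, file locations assumed:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write through the export
        cmp -b -n 1M "$tmp" "$nbd"                                # read back and compare
    done
    rm "$tmp"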
00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.328 18:58:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:53.328 18:58:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.328 Malloc0 00:07:53.328 18:58:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.328 Malloc1 00:07:53.328 18:58:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.328 18:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.587 /dev/nbd0 00:07:53.587 18:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.587 18:58:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:53.587 1+0 records in 00:07:53.587 1+0 records out 00:07:53.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223165 s, 18.4 MB/s 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.587 18:58:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.587 18:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.587 18:58:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.587 18:58:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:53.847 /dev/nbd1 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.847 1+0 records in 00:07:53.847 1+0 records out 00:07:53.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229536 s, 17.8 MB/s 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.847 18:58:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.847 18:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.106 { 00:07:54.106 
"nbd_device": "/dev/nbd0", 00:07:54.106 "bdev_name": "Malloc0" 00:07:54.106 }, 00:07:54.106 { 00:07:54.106 "nbd_device": "/dev/nbd1", 00:07:54.106 "bdev_name": "Malloc1" 00:07:54.106 } 00:07:54.106 ]' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.106 { 00:07:54.106 "nbd_device": "/dev/nbd0", 00:07:54.106 "bdev_name": "Malloc0" 00:07:54.106 }, 00:07:54.106 { 00:07:54.106 "nbd_device": "/dev/nbd1", 00:07:54.106 "bdev_name": "Malloc1" 00:07:54.106 } 00:07:54.106 ]' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.106 /dev/nbd1' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.106 /dev/nbd1' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.106 18:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.107 256+0 records in 00:07:54.107 256+0 records out 00:07:54.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106573 s, 98.4 MB/s 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.107 256+0 records in 00:07:54.107 256+0 records out 00:07:54.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143313 s, 73.2 MB/s 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.107 256+0 records in 00:07:54.107 256+0 records out 00:07:54.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150731 s, 69.6 MB/s 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.107 18:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.367 18:58:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.626 18:58:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:54.886 18:58:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:54.886 18:58:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:55.146 18:58:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:55.146 [2024-07-25 18:58:47.532941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.146 [2024-07-25 18:58:47.600615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.146 [2024-07-25 18:58:47.600616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.405 [2024-07-25 18:58:47.641518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:55.405 [2024-07-25 18:58:47.641559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:57.942 18:58:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 616629 /var/tmp/spdk-nbd.sock 00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 616629 ']' 00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
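The preceding block completed the NBD round trip exercised by app_repeat: a 1 MiB random file is written through /dev/nbd0 and /dev/nbd1 with direct I/O, read back with cmp, the disks are stopped over the RPC socket, and nbd_get_disks must then return an empty list. A condensed sketch of that cycle, with a hypothetical scratch path standing in for the workspace-specific nbdrandtest file (the real helpers live in bdev/nbd_common.sh; this is not the verbatim script):

    rpc=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest                                   # hypothetical scratch file
    dd if=/dev/urandom of="$tmp" bs=4096 count=256         # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write through NBD
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                         # byte-for-byte read-back
    done
    rm "$tmp"
    for dev in /dev/nbd0 /dev/nbd1; do
        scripts/rpc.py -s "$rpc" nbd_stop_disk "$dev"
    done
    # after the stop, no /dev/nbd entries may remain:
    count=$(scripts/rpc.py -s "$rpc" nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

The "|| true" mirrors the lone "true" in the trace above: grep -c exits non-zero when it counts zero matches, so the count of 0 has to be rescued from the failing pipeline.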
00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.942 18:58:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.201 18:58:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.201 18:58:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:58.202 18:58:50 event.app_repeat -- event/event.sh@39 -- # killprocess 616629 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 616629 ']' 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 616629 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 616629 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 616629' 00:07:58.202 killing process with pid 616629 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@969 -- # kill 616629 00:07:58.202 18:58:50 event.app_repeat -- common/autotest_common.sh@974 -- # wait 616629 00:07:58.462 spdk_app_start is called in Round 0. 00:07:58.462 Shutdown signal received, stop current app iteration 00:07:58.462 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.462 spdk_app_start is called in Round 1. 00:07:58.462 Shutdown signal received, stop current app iteration 00:07:58.462 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.462 spdk_app_start is called in Round 2. 00:07:58.462 Shutdown signal received, stop current app iteration 00:07:58.462 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.462 spdk_app_start is called in Round 3. 00:07:58.462 Shutdown signal received, stop current app iteration 00:07:58.462 18:58:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:58.462 18:58:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:58.462 00:07:58.462 real 0m16.412s 00:07:58.462 user 0m35.557s 00:07:58.462 sys 0m2.423s 00:07:58.462 18:58:50 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.462 18:58:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.462 ************************************ 00:07:58.462 END TEST app_repeat 00:07:58.462 ************************************ 00:07:58.462 18:58:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:58.462 18:58:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:58.462 18:58:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.462 18:58:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.462 18:58:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.462 ************************************ 00:07:58.462 START TEST cpu_locks 00:07:58.462 ************************************ 00:07:58.462 18:58:50 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:58.721 * Looking for test storage... 
00:07:58.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:58.721 18:58:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:58.721 18:58:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:58.721 18:58:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:58.721 18:58:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:58.721 18:58:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.721 18:58:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.721 18:58:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.721 ************************************ 00:07:58.721 START TEST default_locks 00:07:58.721 ************************************ 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=619653 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 619653 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 619653 ']' 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.721 18:58:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.721 [2024-07-25 18:58:51.028119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
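killprocess, which reaped the app_repeat target a moment ago and closes out every cpu_locks test that follows, wraps the kill in two guards: a kill -0 liveness probe and a comm-name check so that a stale or recycled pid can never take down an unrelated process (in particular a sudo wrapper). A minimal re-creation of the pattern as traced from common/autotest_common.sh, not the verbatim helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
            [ "$name" = sudo ] && return 1               # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap it; ignore its exit code
    }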
00:07:58.721 [2024-07-25 18:58:51.028173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619653 ] 00:07:58.721 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.721 [2024-07-25 18:58:51.094874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.721 [2024-07-25 18:58:51.164887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.657 18:58:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.657 18:58:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:59.657 18:58:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 619653 00:07:59.657 18:58:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 619653 00:07:59.657 18:58:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.917 lslocks: write error 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 619653 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 619653 ']' 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 619653 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.917 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 619653 00:08:00.176 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.176 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.176 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 619653' 00:08:00.176 killing process with pid 619653 00:08:00.176 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 619653 00:08:00.176 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 619653 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 619653 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 619653 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 619653 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 619653 ']' 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.436 18:58:52 event.cpu_locks.default_locks 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (619653) - No such process 00:08:00.436 ERROR: process (pid: 619653) is no longer running 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:00.436 00:08:00.436 real 0m1.737s 00:08:00.436 user 0m1.855s 00:08:00.436 sys 0m0.588s 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.436 18:58:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 ************************************ 00:08:00.436 END TEST default_locks 00:08:00.436 ************************************ 00:08:00.436 18:58:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:00.436 18:58:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.436 18:58:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.436 18:58:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 ************************************ 00:08:00.436 START TEST default_locks_via_rpc 00:08:00.436 ************************************ 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=619922 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 619922 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 619922 ']' 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.436 18:58:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.436 [2024-07-25 18:58:52.837598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:00.436 [2024-07-25 18:58:52.837643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619922 ] 00:08:00.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.436 [2024-07-25 18:58:52.906389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.696 [2024-07-25 18:58:52.972947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 619922 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 619922 00:08:01.265 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.524 18:58:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 619922 00:08:01.524 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 619922 ']' 00:08:01.524 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 619922 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:01.525 18:58:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 619922 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 619922' 00:08:01.525 killing process with pid 619922 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 619922 00:08:01.525 18:58:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 619922 00:08:01.784 00:08:01.784 real 0m1.453s 00:08:01.784 user 0m1.561s 00:08:01.784 sys 0m0.458s 00:08:01.784 18:58:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.784 18:58:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.784 ************************************ 00:08:01.784 END TEST default_locks_via_rpc 00:08:01.784 ************************************ 00:08:02.044 18:58:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:02.044 18:58:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.044 18:58:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.044 18:58:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.044 ************************************ 00:08:02.044 START TEST non_locking_app_on_locked_coremask 00:08:02.044 ************************************ 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=620188 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 620188 /var/tmp/spdk.sock 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 620188 ']' 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
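Every test here starts the same way: spdk_tgt is launched in the background with an explicit core mask, its pid is recorded, and waitforlisten blocks until the RPC socket answers, printing the "Waiting for process..." message seen above. The helper's body is not expanded in this trace, so the loop below is only an assumed shape of such a wait; rpc_get_methods is a standard SPDK RPC, but its use inside waitforlisten is an assumption:

    build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten() {                    # assumed implementation; internals not shown in this log
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # died before listening
            scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock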
00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.044 18:58:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.044 [2024-07-25 18:58:54.356298] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:02.044 [2024-07-25 18:58:54.356337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620188 ] 00:08:02.044 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.044 [2024-07-25 18:58:54.425044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.044 [2024-07-25 18:58:54.502228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=620419 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 620419 /var/tmp/spdk2.sock 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 620419 ']' 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.981 18:58:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.981 [2024-07-25 18:58:55.229627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:02.981 [2024-07-25 18:58:55.229677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620419 ] 00:08:02.981 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.981 [2024-07-25 18:58:55.300862] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
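The "CPU core locks deactivated." notice above is the point of this test: two targets share core 0, and the second can only do so because it skips the per-core lock. This log exercises both ways of relaxing the lock, the startup flag just seen and the runtime RPC pair traced earlier in default_locks_via_rpc; both commands appear verbatim in the trace:

    # 1) at startup, for a second instance sharing an already-locked core:
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # 2) at runtime, over the RPC socket of a running target:
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # locks released
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # locks re-taken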
00:08:02.981 [2024-07-25 18:58:55.300887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.981 [2024-07-25 18:58:55.444365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 620188 00:08:03.920 lslocks: write error 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 620188 ']' 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620188' 00:08:03.920 killing process with pid 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 620188 00:08:03.920 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 620188 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 620419 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 620419 ']' 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 620419 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.858 18:58:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620419 00:08:04.858 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.858 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.858 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620419' 00:08:04.858 killing 
process with pid 620419 00:08:04.858 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 620419 00:08:04.858 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 620419 00:08:05.117 00:08:05.117 real 0m3.043s 00:08:05.117 user 0m3.342s 00:08:05.117 sys 0m0.813s 00:08:05.117 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.117 18:58:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.117 ************************************ 00:08:05.117 END TEST non_locking_app_on_locked_coremask 00:08:05.117 ************************************ 00:08:05.117 18:58:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:05.117 18:58:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.117 18:58:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.117 18:58:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.117 ************************************ 00:08:05.117 START TEST locking_app_on_unlocked_coremask 00:08:05.117 ************************************ 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=620816 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 620816 /var/tmp/spdk.sock 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 620816 ']' 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.117 18:58:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.117 [2024-07-25 18:58:57.469309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:05.117 [2024-07-25 18:58:57.469351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620816 ] 00:08:05.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.117 [2024-07-25 18:58:57.536532] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
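locks_exist, run before each killprocess above and again just below, simply asks lslocks which locks the pid holds and greps for the spdk_cpu_lock prefix. The recurring "lslocks: write error" lines are almost certainly benign: grep -q exits on its first match and closes the pipe, so lslocks takes an EPIPE while still printing (an inference from the pattern, not something the log states). The check reduces to:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock   # grep -q may EPIPE lslocks
    }
    locks_exist "$spdk_tgt_pid" && echo "core lock held"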
00:08:05.117 [2024-07-25 18:58:57.536557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.375 [2024-07-25 18:58:57.614578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=620931 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 620931 /var/tmp/spdk2.sock 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 620931 ']' 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.943 18:58:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.943 [2024-07-25 18:58:58.349656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
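locking_app_on_unlocked_coremask inverts the previous arrangement: here the first target (pid 620816) is the one started with --disable-cpumask-locks, so the core-0 lock file is never claimed, and the second, normally locking -m 0x1 instance (pid 620931, just launched above on /var/tmp/spdk2.sock) starts cleanly and takes the lock itself, which is why locks_exist is checked against the second pid below. Schematically:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves the core-0 lock free
    pid1=$!
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # acquires the core-0 lock
    pid2=$!
    # locks_exist "$pid2" is expected to succeed; "$pid1" holds nothing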
00:08:05.943 [2024-07-25 18:58:58.349709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620931 ] 00:08:05.943 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.203 [2024-07-25 18:58:58.425512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.203 [2024-07-25 18:58:58.564163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.772 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.772 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:06.772 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 620931 00:08:06.772 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 620931 00:08:06.772 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:07.709 lslocks: write error 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 620816 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 620816 ']' 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 620816 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620816 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620816' 00:08:07.709 killing process with pid 620816 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 620816 00:08:07.709 18:58:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 620816 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 620931 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 620931 ']' 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 620931 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620931 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620931' 00:08:08.278 killing process with pid 620931 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 620931 00:08:08.278 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 620931 00:08:08.536 00:08:08.536 real 0m3.452s 00:08:08.536 user 0m3.737s 00:08:08.536 sys 0m0.983s 00:08:08.536 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.536 18:59:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.536 ************************************ 00:08:08.536 END TEST locking_app_on_unlocked_coremask 00:08:08.536 ************************************ 00:08:08.536 18:59:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:08.537 18:59:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.537 18:59:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.537 18:59:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.537 ************************************ 00:08:08.537 START TEST locking_app_on_locked_coremask 00:08:08.537 ************************************ 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=621432 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 621432 /var/tmp/spdk.sock 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 621432 ']' 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.537 18:59:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.537 [2024-07-25 18:59:00.991167] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:08.537 [2024-07-25 18:59:00.991209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621432 ] 00:08:08.796 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.796 [2024-07-25 18:59:01.057065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.796 [2024-07-25 18:59:01.123721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=621663 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 621663 /var/tmp/spdk2.sock 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 621663 /var/tmp/spdk2.sock 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 621663 /var/tmp/spdk2.sock 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 621663 ']' 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:09.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.364 18:59:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.623 [2024-07-25 18:59:01.870844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
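locking_app_on_locked_coremask needs the opposite outcome: with pid 621432 already holding core 0, the second plain -m 0x1 instance must fail to come up, and the test asserts that with the NOT wrapper, which succeeds only when the wrapped command fails. The traced helper is more elaborate (valid_exec_arg plus exit-status bookkeeping that treats codes above 128 as signals), but its effect here reduces to:

    NOT() { ! "$@"; }    # simplified; see the trace for the real es handling

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # core 0 is already locked
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock         # passes only if startup fails

The claim_cpu_cores error printed next is exactly that expected failure.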
00:08:09.623 [2024-07-25 18:59:01.870891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621663 ] 00:08:09.623 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.623 [2024-07-25 18:59:01.947161] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 621432 has claimed it. 00:08:09.623 [2024-07-25 18:59:01.947193] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:10.190 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (621663) - No such process 00:08:10.190 ERROR: process (pid: 621663) is no longer running 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 621432 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 621432 00:08:10.190 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:10.448 lslocks: write error 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 621432 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 621432 ']' 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 621432 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.448 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621432 00:08:10.707 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.707 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.707 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621432' 00:08:10.707 killing process with pid 621432 00:08:10.707 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 621432 00:08:10.707 18:59:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 621432 00:08:10.966 00:08:10.966 real 0m2.313s 00:08:10.966 user 0m2.577s 00:08:10.966 sys 0m0.637s 00:08:10.966 18:59:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.966 18:59:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.966 ************************************ 00:08:10.966 END TEST locking_app_on_locked_coremask 00:08:10.966 ************************************ 00:08:10.966 18:59:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:10.966 18:59:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.966 18:59:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.966 18:59:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.966 ************************************ 00:08:10.966 START TEST locking_overlapped_coremask 00:08:10.966 ************************************ 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=621924 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 621924 /var/tmp/spdk.sock 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 621924 ']' 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.966 18:59:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.966 [2024-07-25 18:59:03.376837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
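The overlapped test's first target runs with -m 0x7, i.e. three reactors on cores 0-2, and the check_remaining_locks helper that appears at the end of this trace expects exactly one lock file per claimed core under /var/tmp. Its comparison, lifted almost directly from the trace:

    # for core mask 0x7 the target must hold locks 000, 001 and 002 and no others
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]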
00:08:10.966 [2024-07-25 18:59:03.376880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621924 ] 00:08:10.966 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.226 [2024-07-25 18:59:03.444763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.226 [2024-07-25 18:59:03.514758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.227 [2024-07-25 18:59:03.514865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.227 [2024-07-25 18:59:03.514866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=621968 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 621968 /var/tmp/spdk2.sock 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 621968 /var/tmp/spdk2.sock 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.793 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 621968 /var/tmp/spdk2.sock 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 621968 ']' 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:11.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.794 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.794 [2024-07-25 18:59:04.257530] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
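The second instance asks for -m 0x1c. The two masks collide on exactly one core: 0x7 is binary 00111 (cores 0-2) and 0x1c is 11100 (cores 2-4), so only core 2 is claimed twice, and that is the core named in the claim_cpu_cores error that follows. The overlap can be confirmed with plain shell arithmetic:

    printf '0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2 is the contested one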
00:08:11.794 [2024-07-25 18:59:04.257578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621968 ] 00:08:12.053 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.053 [2024-07-25 18:59:04.335908] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 621924 has claimed it. 00:08:12.053 [2024-07-25 18:59:04.335950] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:12.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (621968) - No such process 00:08:12.621 ERROR: process (pid: 621968) is no longer running 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 621924 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 621924 ']' 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 621924 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621924 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621924' 00:08:12.621 killing process with pid 621924 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # 
kill 621924 00:08:12.621 18:59:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 621924 00:08:12.880 00:08:12.880 real 0m1.946s 00:08:12.880 user 0m5.528s 00:08:12.880 sys 0m0.432s 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.880 ************************************ 00:08:12.880 END TEST locking_overlapped_coremask 00:08:12.880 ************************************ 00:08:12.880 18:59:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:12.880 18:59:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.880 18:59:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.880 18:59:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.880 ************************************ 00:08:12.880 START TEST locking_overlapped_coremask_via_rpc 00:08:12.880 ************************************ 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=622201 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 622201 /var/tmp/spdk.sock 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 622201 ']' 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.880 18:59:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.138 [2024-07-25 18:59:05.392197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:13.138 [2024-07-25 18:59:05.392240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622201 ] 00:08:13.138 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.138 [2024-07-25 18:59:05.461423] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.138 [2024-07-25 18:59:05.461448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.138 [2024-07-25 18:59:05.540684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.138 [2024-07-25 18:59:05.540790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.138 [2024-07-25 18:59:05.540791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=622436 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 622436 /var/tmp/spdk2.sock 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 622436 ']' 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.071 18:59:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.071 [2024-07-25 18:59:06.278319] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:14.071 [2024-07-25 18:59:06.278363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622436 ] 00:08:14.071 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.071 [2024-07-25 18:59:06.355508] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:14.071 [2024-07-25 18:59:06.355532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.071 [2024-07-25 18:59:06.500354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.071 [2024-07-25 18:59:06.500471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:14.071 [2024-07-25 18:59:06.500470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.669 [2024-07-25 18:59:07.118976] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 622201 has claimed it. 
00:08:14.669 request: 00:08:14.669 { 00:08:14.669 "method": "framework_enable_cpumask_locks", 00:08:14.669 "req_id": 1 00:08:14.669 } 00:08:14.669 Got JSON-RPC error response 00:08:14.669 response: 00:08:14.669 { 00:08:14.669 "code": -32603, 00:08:14.669 "message": "Failed to claim CPU core: 2" 00:08:14.669 } 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 622201 /var/tmp/spdk.sock 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 622201 ']' 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.669 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 622436 /var/tmp/spdk2.sock 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 622436 ']' 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
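
The request/response pair above is the raw JSON-RPC exchange that rpc_cmd performs over the UNIX socket. A hedged sketch, not from the log, of issuing the same call by hand with the rpc.py script this workspace uses elsewhere:

    # ask the second target (pid 622436, socket /var/tmp/spdk2.sock) to claim its cores;
    # expected to fail with code -32603 because pid 622201 already holds the lock
    # file for the shared core (/var/tmp/spdk_cpu_lock_002)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
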
00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.928 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:15.187 00:08:15.187 real 0m2.193s 00:08:15.187 user 0m0.962s 00:08:15.187 sys 0m0.169s 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.187 18:59:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.187 ************************************ 00:08:15.187 END TEST locking_overlapped_coremask_via_rpc 00:08:15.187 ************************************ 00:08:15.187 18:59:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:15.187 18:59:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 622201 ]] 00:08:15.187 18:59:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 622201 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 622201 ']' 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 622201 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 622201 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 622201' 00:08:15.187 killing process with pid 622201 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 622201 00:08:15.187 18:59:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 622201 00:08:15.755 18:59:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 622436 ]] 00:08:15.755 18:59:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 622436 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 622436 ']' 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 622436 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 622436 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 622436' 00:08:15.755 killing process with pid 622436 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 622436 00:08:15.755 18:59:07 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 622436 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 622201 ]] 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 622201 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 622201 ']' 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 622201 00:08:16.014 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (622201) - No such process 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 622201 is not found' 00:08:16.014 Process with pid 622201 is not found 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 622436 ]] 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 622436 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 622436 ']' 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 622436 00:08:16.014 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (622436) - No such process 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 622436 is not found' 00:08:16.014 Process with pid 622436 is not found 00:08:16.014 18:59:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:16.014 00:08:16.014 real 0m17.451s 00:08:16.014 user 0m30.476s 00:08:16.014 sys 0m5.002s 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.014 18:59:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.014 ************************************ 00:08:16.014 END TEST cpu_locks 00:08:16.014 ************************************ 00:08:16.014 00:08:16.014 real 0m43.036s 00:08:16.014 user 1m22.906s 00:08:16.014 sys 0m8.433s 00:08:16.014 18:59:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.014 18:59:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.014 ************************************ 00:08:16.014 END TEST event 00:08:16.014 ************************************ 00:08:16.014 18:59:08 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:16.014 18:59:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:16.014 18:59:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.014 18:59:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.014 ************************************ 00:08:16.014 START TEST thread 00:08:16.014 ************************************ 00:08:16.014 18:59:08 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:16.273 * Looking for test storage... 00:08:16.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:16.273 18:59:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.273 18:59:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:16.273 18:59:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.273 18:59:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.273 ************************************ 00:08:16.273 START TEST thread_poller_perf 00:08:16.273 ************************************ 00:08:16.273 18:59:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:16.273 [2024-07-25 18:59:08.558039] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:16.273 [2024-07-25 18:59:08.558113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622892 ] 00:08:16.273 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.273 [2024-07-25 18:59:08.630873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.273 [2024-07-25 18:59:08.703140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.273 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:17.650 ====================================== 00:08:17.650 busy:2310257226 (cyc) 00:08:17.650 total_run_count: 408000 00:08:17.650 tsc_hz: 2300000000 (cyc) 00:08:17.650 ====================================== 00:08:17.650 poller_cost: 5662 (cyc), 2461 (nsec) 00:08:17.650 00:08:17.650 real 0m1.245s 00:08:17.650 user 0m1.157s 00:08:17.650 sys 0m0.084s 00:08:17.650 18:59:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.650 18:59:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.650 ************************************ 00:08:17.650 END TEST thread_poller_perf 00:08:17.650 ************************************ 00:08:17.650 18:59:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.650 18:59:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:17.650 18:59:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.650 18:59:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.650 ************************************ 00:08:17.650 START TEST thread_poller_perf 00:08:17.650 ************************************ 00:08:17.650 18:59:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:17.650 [2024-07-25 18:59:09.870135] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
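
For the first poller_perf run above, poller_cost is simply busy cycles divided by the run count, converted to nanoseconds via the TSC rate. A small sketch, not from the log, that reproduces the reported "poller_cost: 5662 (cyc), 2461 (nsec)":

    awk 'BEGIN { busy = 2310257226; runs = 408000; tsc_hz = 2300000000
                 cyc = busy / runs                          # ~5662.4 cycles per poller invocation
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (tsc_hz / 1e9) }'

The second run that follows (-l 0, i.e. untimed pollers) reports a far lower per-call cost, consistent with skipping the timer bookkeeping that 1-microsecond timed pollers incur.
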
00:08:17.651 [2024-07-25 18:59:09.870201] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623089 ] 00:08:17.651 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.651 [2024-07-25 18:59:09.925614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.651 [2024-07-25 18:59:09.998820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.651 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:18.613 ====================================== 00:08:18.613 busy:2301592266 (cyc) 00:08:18.613 total_run_count: 5256000 00:08:18.613 tsc_hz: 2300000000 (cyc) 00:08:18.613 ====================================== 00:08:18.613 poller_cost: 437 (cyc), 190 (nsec) 00:08:18.613 00:08:18.613 real 0m1.221s 00:08:18.613 user 0m1.149s 00:08:18.613 sys 0m0.068s 00:08:18.613 18:59:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.613 18:59:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.613 ************************************ 00:08:18.613 END TEST thread_poller_perf 00:08:18.613 ************************************ 00:08:18.871 18:59:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:18.871 00:08:18.871 real 0m2.693s 00:08:18.871 user 0m2.384s 00:08:18.871 sys 0m0.317s 00:08:18.871 18:59:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.871 18:59:11 thread -- common/autotest_common.sh@10 -- # set +x 00:08:18.871 ************************************ 00:08:18.871 END TEST thread 00:08:18.871 ************************************ 00:08:18.871 18:59:11 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:18.871 18:59:11 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:18.871 18:59:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.871 18:59:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.871 18:59:11 -- common/autotest_common.sh@10 -- # set +x 00:08:18.871 ************************************ 00:08:18.871 START TEST app_cmdline 00:08:18.871 ************************************ 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:18.871 * Looking for test storage... 00:08:18.871 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:18.871 18:59:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:18.871 18:59:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=623389 00:08:18.871 18:59:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 623389 00:08:18.871 18:59:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 623389 ']' 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:18.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.871 18:59:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.871 [2024-07-25 18:59:11.313888] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:18.871 [2024-07-25 18:59:11.313955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623389 ] 00:08:18.871 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.130 [2024-07-25 18:59:11.383179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.130 [2024-07-25 18:59:11.461800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.697 18:59:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.697 18:59:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:19.697 18:59:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:19.957 { 00:08:19.957 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:19.957 "fields": { 00:08:19.957 "major": 24, 00:08:19.957 "minor": 9, 00:08:19.957 "patch": 0, 00:08:19.957 "suffix": "-pre", 00:08:19.957 "commit": "704257090" 00:08:19.957 } 00:08:19.957 } 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:19.957 18:59:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.957 18:59:12 
app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:19.957 18:59:12 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:20.224 request: 00:08:20.224 { 00:08:20.224 "method": "env_dpdk_get_mem_stats", 00:08:20.224 "req_id": 1 00:08:20.224 } 00:08:20.224 Got JSON-RPC error response 00:08:20.224 response: 00:08:20.224 { 00:08:20.224 "code": -32601, 00:08:20.224 "message": "Method not found" 00:08:20.224 } 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.224 18:59:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 623389 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 623389 ']' 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 623389 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623389 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623389' 00:08:20.224 killing process with pid 623389 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 623389 00:08:20.224 18:59:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 623389 00:08:20.483 00:08:20.483 real 0m1.738s 00:08:20.483 user 0m2.105s 00:08:20.483 sys 0m0.451s 00:08:20.483 18:59:12 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.483 18:59:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:20.483 ************************************ 00:08:20.483 END TEST app_cmdline 00:08:20.483 ************************************ 00:08:20.483 18:59:12 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:20.483 18:59:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.483 18:59:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.483 18:59:12 -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 ************************************ 00:08:20.742 START TEST version 00:08:20.743 ************************************ 00:08:20.743 18:59:12 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:20.743 * Looking for test storage... 
00:08:20.743 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:20.743 18:59:13 version -- app/version.sh@17 -- # get_header_version major 00:08:20.743 18:59:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # cut -f2 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # tr -d '"' 00:08:20.743 18:59:13 version -- app/version.sh@17 -- # major=24 00:08:20.743 18:59:13 version -- app/version.sh@18 -- # get_header_version minor 00:08:20.743 18:59:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # cut -f2 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # tr -d '"' 00:08:20.743 18:59:13 version -- app/version.sh@18 -- # minor=9 00:08:20.743 18:59:13 version -- app/version.sh@19 -- # get_header_version patch 00:08:20.743 18:59:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # cut -f2 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # tr -d '"' 00:08:20.743 18:59:13 version -- app/version.sh@19 -- # patch=0 00:08:20.743 18:59:13 version -- app/version.sh@20 -- # get_header_version suffix 00:08:20.743 18:59:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # cut -f2 00:08:20.743 18:59:13 version -- app/version.sh@14 -- # tr -d '"' 00:08:20.743 18:59:13 version -- app/version.sh@20 -- # suffix=-pre 00:08:20.743 18:59:13 version -- app/version.sh@22 -- # version=24.9 00:08:20.743 18:59:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:20.743 18:59:13 version -- app/version.sh@28 -- # version=24.9rc0 00:08:20.743 18:59:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:20.743 18:59:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:20.743 18:59:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:20.743 18:59:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:20.743 00:08:20.743 real 0m0.154s 00:08:20.743 user 0m0.078s 00:08:20.743 sys 0m0.113s 00:08:20.743 18:59:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.743 18:59:13 version -- common/autotest_common.sh@10 -- # set +x 00:08:20.743 ************************************ 00:08:20.743 END TEST version 00:08:20.743 ************************************ 00:08:20.743 18:59:13 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@202 -- # uname -s 00:08:20.743 18:59:13 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:20.743 18:59:13 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:20.743 18:59:13 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:20.743 18:59:13 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:08:20.743 18:59:13 -- 
spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:20.743 18:59:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.743 18:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.743 18:59:13 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:20.743 18:59:13 -- spdk/autotest.sh@287 -- # '[' rdma = rdma ']' 00:08:20.743 18:59:13 -- spdk/autotest.sh@288 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:20.743 18:59:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.743 18:59:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.743 18:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:21.004 ************************************ 00:08:21.004 START TEST nvmf_rdma 00:08:21.004 ************************************ 00:08:21.004 18:59:13 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:21.004 * Looking for test storage... 00:08:21.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:21.004 18:59:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:21.004 18:59:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:21.004 18:59:13 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:21.004 18:59:13 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.004 18:59:13 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.004 18:59:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:21.004 ************************************ 00:08:21.004 START TEST nvmf_target_core 00:08:21.004 ************************************ 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:08:21.004 * Looking for test storage... 00:08:21.004 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.004 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.265 18:59:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.266 ************************************ 00:08:21.266 START TEST nvmf_abort 00:08:21.266 ************************************ 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:21.266 * Looking for test storage... 
00:08:21.266 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.266 18:59:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.841 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:27.842 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:27.842 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.842 18:59:19 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:27.842 Found net devices under 0000:af:00.0: mlx_0_0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:27.842 Found net devices under 0000:af:00.1: mlx_0_1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:27.842 
18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:27.842 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.842 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:08:27.842 altname enp175s0f0np0 00:08:27.842 altname ens801f0np0 00:08:27.842 inet 192.168.100.8/24 scope global mlx_0_0 00:08:27.842 valid_lft forever preferred_lft forever 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:27.842 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:27.842 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:08:27.842 altname enp175s0f1np1 00:08:27.842 altname ens801f1np1 00:08:27.842 inet 192.168.100.9/24 scope global mlx_0_1 00:08:27.842 valid_lft forever preferred_lft forever 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:27.842 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:27.843 192.168.100.9' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:27.843 192.168.100.9' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:27.843 192.168.100.9' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=627033 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 627033 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@831 -- # '[' -z 627033 ']' 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.843 18:59:19 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:27.843 [2024-07-25 18:59:19.576406] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:27.843 [2024-07-25 18:59:19.576452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.843 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.843 [2024-07-25 18:59:19.645289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.843 [2024-07-25 18:59:19.717590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.843 [2024-07-25 18:59:19.717633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.843 [2024-07-25 18:59:19.717640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.843 [2024-07-25 18:59:19.717646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.843 [2024-07-25 18:59:19.717650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
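The interface/IP discovery traced above reduces to a short shell pipeline. A minimal sketch reconstructed from the xtrace output (the real helpers live in nvmf/common.sh; the interface names mlx_0_0/mlx_0_1 and addresses are the ones from the log):

    # Per-interface lookup as shown in the trace: column 4 of `ip -o -4`
    # output is "ADDR/PREFIX"; cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9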
00:08:27.843 [2024-07-25 18:59:19.717779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.843 [2024-07-25 18:59:19.717892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.843 [2024-07-25 18:59:19.717892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.102 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.102 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:28.102 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.102 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.103 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.103 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.103 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:28.103 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.103 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.103 [2024-07-25 18:59:20.479908] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe2a580/0xe2ea70) succeed. 00:08:28.103 [2024-07-25 18:59:20.495844] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe2bb20/0xe70110) succeed. 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 Malloc0 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 Delay0 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 [2024-07-25 18:59:20.660669] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.362 18:59:20 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:28.362 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.362 [2024-07-25 18:59:20.772488] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:30.900 Initializing NVMe Controllers 00:08:30.900 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:30.900 controller IO queue size 128 less than required 00:08:30.900 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:30.900 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:30.900 Initialization complete. Launching workers. 
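Target-side setup for the abort test, as traced above, boils down to a handful of RPCs followed by the abort example. A condensed sketch of that sequence (rpc.py and binary paths shortened from the full workspace paths in the log):

    rpc=scripts/rpc.py   # log invokes the full /var/jenkins/workspace/nvmf-phy-autotest/spdk path
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # Delay0 wraps Malloc0 with ~1 s latencies (values are in microseconds),
    # keeping I/O in flight long enough for aborts to land.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128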
00:08:30.900 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 50072 00:08:30.900 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 50133, failed to submit 62 00:08:30.900 success 50073, unsuccess 60, failed 0 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:30.900 rmmod nvme_rdma 00:08:30.900 rmmod nvme_fabrics 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 627033 ']' 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 627033 00:08:30.900 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 627033 ']' 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 627033 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627033 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 627033' 00:08:30.901 killing process with pid 627033 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 627033 00:08:30.901 18:59:22 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 627033 00:08:30.901 18:59:23 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:30.901 00:08:30.901 real 0m9.728s 00:08:30.901 user 0m14.429s 00:08:30.901 sys 0m4.873s 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:30.901 ************************************ 00:08:30.901 END TEST nvmf_abort 00:08:30.901 ************************************ 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.901 ************************************ 00:08:30.901 START TEST nvmf_ns_hotplug_stress 00:08:30.901 ************************************ 00:08:30.901 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:31.160 * Looking for test storage... 00:08:31.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.160 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.161 18:59:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:08:37.734 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:08:37.734 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:08:37.734 Found net devices under 0000:af:00.0: mlx_0_0 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.734 18:59:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:08:37.734 Found net devices under 0000:af:00.1: mlx_0_1 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.734 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:37.735 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.735 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:08:37.735 altname enp175s0f0np0 00:08:37.735 altname ens801f0np0 00:08:37.735 inet 192.168.100.8/24 scope global mlx_0_0 00:08:37.735 valid_lft forever preferred_lft forever 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:37.735 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.735 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:08:37.735 altname enp175s0f1np1 00:08:37.735 altname ens801f1np1 00:08:37.735 inet 192.168.100.9/24 scope global mlx_0_1 00:08:37.735 valid_lft forever preferred_lft forever 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.735 18:59:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:37.735 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:37.736 192.168.100.9' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:37.736 192.168.100.9' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:37.736 192.168.100.9' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=630844 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 630844 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 630844 ']' 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.736 18:59:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.736 [2024-07-25 18:59:29.373433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.736 [2024-07-25 18:59:29.373491] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.736 [2024-07-25 18:59:29.442303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.736 [2024-07-25 18:59:29.523792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.736 [2024-07-25 18:59:29.523824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.736 [2024-07-25 18:59:29.523831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.736 [2024-07-25 18:59:29.523837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.736 [2024-07-25 18:59:29.523842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
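The nvmf/common.sh@112-113 markers traced above show how the IPv4 address of each RDMA interface is recovered before the target starts. A standalone sketch of that helper, lifted from the traced pipeline (the empty-address fallback at @75/@81 is shown as an illustrative check):

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", so cut strips the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip=$(get_ip_address mlx_0_0)             # yields 192.168.100.8 on this node
    [[ -z $ip ]] && ip addr show mlx_0_0     # @75/@81: dump the interface when no address came back

The two addresses gathered into RDMA_IP_LIST are then split into the target IPs exactly as traced at common.sh@457-458:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9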
00:08:37.736 [2024-07-25 18:59:29.523950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.736 [2024-07-25 18:59:29.523981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.736 [2024-07-25 18:59:29.523982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:37.995 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:37.995 [2024-07-25 18:59:30.443847] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x197f580/0x1983a70) succeed. 00:08:37.995 [2024-07-25 18:59:30.453200] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1980b20/0x19c5110) succeed. 00:08:38.255 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.514 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:38.514 [2024-07-25 18:59:30.940246] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:38.514 18:59:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:38.772 18:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:39.032 Malloc0 00:08:39.032 18:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:39.291 Delay0 00:08:39.291 18:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.291 18:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:08:39.550 NULL1 00:08:39.550 18:59:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:39.809 18:59:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:39.809 18:59:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=631347 00:08:39.809 18:59:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:39.810 18:59:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.810 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.188 Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 18:59:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.188 18:59:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:41.188 18:59:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:41.447 true 00:08:41.447 18:59:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:41.447 18:59:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 18:59:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.275 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:08:42.275 18:59:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:42.275 18:59:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:42.533 true 00:08:42.533 18:59:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:42.533 18:59:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 18:59:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.468 18:59:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:43.468 18:59:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:43.727 true 00:08:43.727 18:59:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:43.727 18:59:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 18:59:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.665 18:59:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:44.665 18:59:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 
00:08:44.923 true 00:08:44.923 18:59:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:44.923 18:59:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 18:59:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.861 18:59:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:45.861 18:59:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:46.120 true 00:08:46.120 18:59:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:46.120 18:59:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.058 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.317 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:47.317 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:47.317 true 00:08:47.317 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:47.317 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.628 18:59:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.887 18:59:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:47.887 18:59:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:47.887 true 
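The cycle traced at ns_hotplug_stress.sh@44-@50 above is the heart of the test: while spdk_nvme_perf (PID 631347) keeps I/O in flight, namespace 1 is detached and re-attached and the NULL1 bdev is grown by one unit per pass. Delay0, built earlier at @33 as a delay bdev over Malloc0 with 1000000 us latencies (about one second per I/O), guarantees requests are always outstanding when the hot-remove hits. A condensed sketch of the loop, with the path shortened and the while wrapper inferred from the repeating @44 guard:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44: run until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1 under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back
        null_size=$((null_size + 1))                                   # @49
        $rpc bdev_null_resize NULL1 "$null_size"                       # @50: resize NULL1 under I/O
    done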
00:08:47.887 18:59:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:47.887 18:59:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 18:59:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.264 18:59:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:49.264 18:59:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:49.264 true 00:08:49.523 18:59:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:49.523 18:59:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 18:59:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.461 18:59:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:50.461 18:59:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:50.720 true 00:08:50.720 18:59:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:50.720 18:59:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
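The read-error lines above are expected rather than a failure: spdk_nvme_perf was launched with -Q 1000, so it prints only every thousandth failed read and rolls the rest into the "Message suppressed 999 times" notes, and reads fail exactly when they land on the namespace that is momentarily detached. The interleaved kill -0 631347 probes never deliver a signal; signal 0 performs only the existence check, which makes it a cheap liveness test for the perf process:

    # Exit status is 0 while PID 631347 is alive, non-zero once it is gone
    # (as the log later shows: "kill: (631347) - No such process").
    if kill -0 631347 2>/dev/null; then
        echo "perf still running; keep hot-plugging"
    else
        echo "perf finished; tear down"
    fi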
00:08:51.549 18:59:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.549 18:59:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:51.549 18:59:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:51.809 true 00:08:51.809 18:59:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:51.809 18:59:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 18:59:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.746 18:59:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:52.746 18:59:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:53.005 true 00:08:53.005 18:59:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:53.005 18:59:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 18:59:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.941 18:59:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:53.941 18:59:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:54.201 true 00:08:54.201 18:59:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:54.201 18:59:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.139 18:59:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.139 18:59:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:55.139 18:59:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:55.398 true 00:08:55.398 18:59:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:55.398 18:59:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.657 18:59:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.917 18:59:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:55.917 18:59:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:55.917 true 00:08:56.222 18:59:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:56.222 18:59:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 18:59:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.419 18:59:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:57.419 18:59:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:57.419 true 00:08:57.419 18:59:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:57.419 18:59:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 18:59:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.617 18:59:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:58.617 18:59:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:58.617 true 00:08:58.617 18:59:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:58.617 18:59:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.557 18:59:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:08:59.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.816 18:59:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:59.816 18:59:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:59.816 true 00:08:59.816 18:59:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:08:59.816 18:59:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.753 18:59:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.012 18:59:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:01.012 18:59:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:01.012 true 00:09:01.268 18:59:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:01.268 18:59:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.836 18:59:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.095 18:59:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:02.095 18:59:54 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:02.353 true 00:09:02.353 18:59:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:02.353 18:59:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.288 18:59:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.288 18:59:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:03.288 18:59:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:03.546 true 00:09:03.546 18:59:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:03.546 18:59:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.805 18:59:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.064 18:59:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:04.064 18:59:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:04.064 true 00:09:04.064 18:59:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:04.064 18:59:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 18:59:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.440 18:59:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:05.440 18:59:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:05.699 true 00:09:05.699 18:59:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:05.699 18:59:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 18:59:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.637 18:59:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:06.637 18:59:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:06.896 true 00:09:06.896 18:59:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:06.896 18:59:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 18:59:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.832 19:00:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:07.832 19:00:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:08.090 true 00:09:08.090 19:00:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:08.090 19:00:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 19:00:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.024 19:00:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:09.024 19:00:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:09.280 true 00:09:09.280 19:00:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:09.280 19:00:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.212 19:00:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.212 19:00:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:10.212 19:00:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:10.470 true 00:09:10.470 19:00:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:10.470 19:00:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.728 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.986 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:10.986 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:10.986 true 00:09:10.986 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347 00:09:10.986 19:00:03 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.244 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:11.502 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:11.502 19:00:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:11.759 true
00:09:11.759 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347
00:09:11.759 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.759 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:12.017 Initializing NVMe Controllers
00:09:12.017 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:12.017 Controller IO queue size 128, less than required.
00:09:12.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.017 Controller IO queue size 128, less than required.
00:09:12.017 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.017 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:12.017 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:12.017 Initialization complete. Launching workers.
00:09:12.017 ========================================================
00:09:12.017                                                                                                Latency(us)
00:09:12.017 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:09:12.017 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5289.50       2.58   21909.22     837.58 1007182.04
00:09:12.017 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   34084.00      16.64    3755.32    2370.99  304546.31
00:09:12.017 ========================================================
00:09:12.017 Total                                                                        :   39373.50      19.23    6194.14     837.58 1007182.04
00:09:12.017
00:09:12.017 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:12.017 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:12.275 true
00:09:12.275 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 631347
00:09:12.275 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (631347) - No such process
00:09:12.275 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 631347
00:09:12.275 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:12.532 19:00:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:12.790 null0
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:12.790 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:13.047 null1
00:09:13.047 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:13.047 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:13.047 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:13.306 null2
00:09:13.306 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:13.306
19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.306 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:13.306 null3 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:13.564 null4 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.564 19:00:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:13.821 null5 00:09:13.821 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.821 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.821 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:14.081 null6 00:09:14.081 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.081 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.081 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:14.081 null7 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
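A quick consistency check on the perf summary above: the Total row is the plain sum of the per-namespace IOPS, and its Average column is the IOPS-weighted mean of the two latencies:

    Total IOPS  : 5289.50 + 34084.00                                 = 39373.50
    Avg latency : (5289.50*21909.22 + 34084.00*3755.32) / 39373.50  ~= 6194.14 us

Both match the reported row; NSID 1 (the hot-plugged Delay0 namespace) dominates latency while NSID 2 (NULL1) carries nearly all the throughput.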
00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.340 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 637394 637395 637397 637399 637401 637403 637405 637407 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
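The interleaved sh@62-sh@66 and sh@14-sh@18 entries above are eight add_remove workers being forked into the background, each bound to its own NSID and bdev, then reaped with a single wait on PIDs 637394-637407. The function body and loop bound can be read straight off the trace (sh@14 declares nsid/bdev, sh@16 iterates ten times, sh@17 adds, sh@18 removes); a sketch under that reading, with rpc_py again an assumed shorthand for the full rpc.py path:

  # Each worker hammers one namespace slot: attach its null bdev as NSID n,
  # detach it, ten rounds per worker (sh@16: (( i < 10 ))).
  add_remove() {
      local nsid=$1 bdev=$2                                                              # sh@14
      for ((i = 0; i < 10; i++)); do                                                     # sh@16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
      done
  }

  for ((i = 0; i < nthreads; i++)); do  # sh@62
      add_remove $((i + 1)) "null$i" &  # sh@63: add_remove 1 null0 ... add_remove 8 null7
      pids+=($!)                        # sh@64
  done
  wait "${pids[@]}"                     # sh@66: 637394 637395 637397 ... 637407 here

Because all eight workers target the same subsystem through the same RPC server, the add and remove calls land in a different order every round, which is exactly the shuffled NSID sequence visible in the trace that follows; that scheduling churn against live RDMA connections is the hotplug stress under test.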
00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.341 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.601 19:00:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.601 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.861 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.120 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.120 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.120 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.120 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.120 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.121 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.379 19:00:07 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.379 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.380 19:00:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.639 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.897 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.156 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.415 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.682 19:00:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.682 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.941 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.942 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.201 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.202 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.461 19:00:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.720 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.720 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.720 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.720 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.720 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.721 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.721 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.721 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.980 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.238 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:09:18.239 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:18.498 rmmod nvme_rdma 00:09:18.498 rmmod nvme_fabrics 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 630844 ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 630844 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 630844 ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 630844 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 630844 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 630844' 00:09:18.498 killing process with pid 630844 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 630844 00:09:18.498 19:00:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 630844 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.765 
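
The xtrace above is the hot-plug churn at the heart of target/ns_hotplug_stress.sh: script line 16 is a counter loop bounded at ten passes, line 17 attaches the null bdevs null0..null7 as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1, and line 18 rips them back out again. A minimal serialized sketch of that loop, reconstructed from the trace rather than copied from the script (the adds complete out of order above, so the real test evidently issues these RPCs concurrently):

```bash
# Sketch reconstructed from the ns_hotplug_stress.sh xtrace, not the verbatim
# script; the real test appears to background these RPCs, this version serializes.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    for n in {1..8}; do                 # nsid n is backed by bdev null(n-1)
        $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in {1..8}; do                 # then detach all eight again
        $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done
```
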
19:00:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:18.765 00:09:18.765 real 0m47.744s 00:09:18.765 user 3m23.113s 00:09:18.765 sys 0m11.667s 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.765 ************************************ 00:09:18.765 END TEST nvmf_ns_hotplug_stress 00:09:18.765 ************************************ 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.765 ************************************ 00:09:18.765 START TEST nvmf_delete_subsystem 00:09:18.765 ************************************ 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:18.765 * Looking for test storage... 00:09:18.765 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.765 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.025 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.026 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.026 19:00:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.594 19:00:16 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:25.594 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:25.594 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:25.595 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:25.595 Found net devices under 0000:af:00.0: mlx_0_0 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:25.595 19:00:16 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:25.595 Found net devices under 0000:af:00.1: mlx_0_1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.595 
19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:25.595 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.595 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:25.595 altname enp175s0f0np0 00:09:25.595 altname ens801f0np0 00:09:25.595 inet 192.168.100.8/24 scope global mlx_0_0 00:09:25.595 valid_lft forever preferred_lft forever 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:25.595 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:25.595 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:25.595 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:25.595 altname enp175s0f1np1 00:09:25.595 altname ens801f1np1 00:09:25.596 inet 192.168.100.9/24 scope global mlx_0_1 00:09:25.596 valid_lft forever preferred_lft forever 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:25.596 192.168.100.9' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:25.596 192.168.100.9' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:25.596 192.168.100.9' 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:25.596 19:00:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.596 19:00:17 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=641970 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 641970 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 641970 ']' 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.596 [2024-07-25 19:00:17.083600] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:25.596 [2024-07-25 19:00:17.083649] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.596 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.596 [2024-07-25 19:00:17.151107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.596 [2024-07-25 19:00:17.233409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.596 [2024-07-25 19:00:17.233447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.596 [2024-07-25 19:00:17.233454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.596 [2024-07-25 19:00:17.233460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.596 [2024-07-25 19:00:17.233466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
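
What the sequence above shows is nvmfappstart from nvmf/common.sh: it launches build/bin/nvmf_tgt with reactor mask 0x3, records its pid (641970 on this run), and blocks in waitforlisten until the target's RPC socket answers; the DPDK/EAL and trace notices are the target coming up on the two cores announced just below. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket (the $spdk variable is illustrative, not a name from common.sh):

```bash
# Start-and-wait pattern sketched from the nvmfappstart/waitforlisten trace above.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# waitforlisten equivalent: poll the RPC socket until the target responds.
for (( i = 0; i < 100; i++ )); do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
```
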
00:09:25.596 [2024-07-25 19:00:17.233522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.596 [2024-07-25 19:00:17.233523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.596 19:00:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.596 [2024-07-25 19:00:17.985921] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x973720/0x977c10) succeed. 00:09:25.596 [2024-07-25 19:00:17.995855] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x974c20/0x9b92b0) succeed. 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.855 [2024-07-25 19:00:18.084893] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.855 NULL1 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.855 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.856 Delay0 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=642157 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:25.856 19:00:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:25.856 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.856 [2024-07-25 19:00:18.205509] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
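
Condensed, the fixture the trace above just built is six RPCs plus a background perf job. Every command below is lifted directly from the xtrace (rpc.py path shortened to $rpc); only the trailing pid capture is inferred from the kill -0 642157 checks that follow:

```bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# The 1 s delay bdev latencies (values above are in microseconds) keep I/O in
# flight long enough for the subsystem to be deleted underneath perf, which
# runs in the background:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
```
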
00:09:27.758 19:00:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.758 19:00:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.758 19:00:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 NVMe io qpair process completion error 00:09:29.136 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.136 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:29.136 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 642157 00:09:29.136 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:29.395 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:29.395 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 642157 00:09:29.395 19:00:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:29.966 Write completed with error (sct=0, sc=8) 00:09:29.966 starting I/O failed: -6 00:09:29.966 Write completed with error (sct=0, sc=8) 00:09:29.966 starting I/O failed: -6 00:09:29.966 Read completed with error (sct=0, sc=8) 00:09:29.966 starting I/O failed: -6 00:09:29.966 Read completed with error (sct=0, sc=8) 00:09:29.966 starting I/O failed: -6 00:09:29.966 Write completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Write completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Read completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 00:09:29.967 Write completed with error (sct=0, sc=8) 00:09:29.967 starting I/O failed: -6 
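
The burst of completion errors around this point is the expected outcome, not a failure: delete_subsystem.sh line 32 deletes cnode1 while perf still has 128 commands queued per qpair, so destroying the subsystem's queues completes each in-flight command with sct=0, sc=8 (generic status 0x08, which the NVMe spec lists as Command Aborted due to SQ Deletion) and makes new submissions fail with -6 (-ENXIO). The script then polls for perf to exit; roughly, as reconstructed from the @34-@38 xtrace (exact error handling in the real script may differ):

```bash
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Give perf up to ~15 s (30 x 0.5 s) to notice the aborts and exit.
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    sleep 0.5
    (( delay++ > 30 )) && exit 1
done
```
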
00:09:29.967 [several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions follow here, interleaved with 'starting I/O failed: -6' submission failures, all timestamped 00:09:29.966-00:09:29.967, as the remaining queued I/O drains]
error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 
Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Read completed with error (sct=0, sc=8) 00:09:29.968 Write completed with error (sct=0, sc=8) 00:09:29.968 Initializing NVMe Controllers 00:09:29.968 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:29.968 Controller IO queue size 128, less than required. 00:09:29.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:29.968 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:29.968 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:29.968 Initialization complete. Launching workers. 00:09:29.968 ======================================================== 00:09:29.968 Latency(us) 00:09:29.968 Device Information : IOPS MiB/s Average min max 00:09:29.968 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.53 0.04 1592952.85 1000204.95 2972243.74 00:09:29.968 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.53 0.04 1593811.78 1000202.72 2972672.25 00:09:29.968 ======================================================== 00:09:29.968 Total : 161.06 0.08 1593382.32 1000202.72 2972672.25 00:09:29.968 00:09:29.968 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:29.968 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 642157 00:09:29.968 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:29.968 [2024-07-25 19:00:22.303781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:29.968 [2024-07-25 19:00:22.303820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
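The kill -0 / sleep 0.5 trace above is delete_subsystem.sh waiting for the perf process to die once its subsystem has been deleted. A minimal bash sketch of that polling pattern (the helper name and the exact bound are illustrative, not the script's own):

```sh
# Poll a background process until it exits, giving up after ~15 s.
# The test open-codes this with `kill -0 $perf_pid` and `sleep 0.5`.
wait_for_perf_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2> /dev/null; do  # kill -0 only probes existence
        (( delay++ > 30 )) && return 1     # bail out after 30 half-second polls
        sleep 0.5
    done
    return 0                               # process exited, as the test expects
}
```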
00:09:29.968 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 642157 00:09:30.536 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (642157) - No such process 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 642157 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 642157 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 642157 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.536 [2024-07-25 19:00:22.822028] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:30.536 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.537 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=642933
00:09:30.537 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:09:30.537 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:09:30.537 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 642933
00:09:30.537 19:00:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:30.537 EAL: No free 2048 kB hugepages reported on node 1
00:09:30.537 [2024-07-25 19:00:22.920109] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:09:31.103 .. 00:09:37.599 [delete_subsystem.sh@60 (( delay++ > 20 )) / @57 kill -0 642933 / @58 sleep 0.5, repeated every 0.5 s from 19:00:23 through 19:00:29 while spdk_nvme_perf runs; identical iterations condensed]
00:09:37.599 Initializing NVMe Controllers
00:09:37.599 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:37.599 Controller IO queue size 128, less than required.
00:09:37.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:37.599 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:37.599 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:37.599 Initialization complete. Launching workers.
00:09:37.599 ========================================================
00:09:37.599 Latency(us)
00:09:37.599 Device Information : IOPS MiB/s Average min max
00:09:37.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001296.77 1000058.95 1004032.07
00:09:37.599 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002350.00 1000091.60 1006210.87
00:09:37.599 ========================================================
00:09:37.599 Total : 256.00 0.12 1001823.38 1000058.95 1006210.87
00:09:37.599
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 642933
00:09:38.166 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (642933) - No such process
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 642933
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:09:38.166 rmmod nvme_rdma
00:09:38.166 rmmod nvme_fabrics
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 641970 ']'
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 641970
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 641970 ']'
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 641970
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:38.166
19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 641970 00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 641970' 00:09:38.166 killing process with pid 641970 00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 641970 00:09:38.166 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 641970 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:38.425 00:09:38.425 real 0m19.594s 00:09:38.425 user 0m50.061s 00:09:38.425 sys 0m5.378s 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.425 ************************************ 00:09:38.425 END TEST nvmf_delete_subsystem 00:09:38.425 ************************************ 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.425 ************************************ 00:09:38.425 START TEST nvmf_host_management 00:09:38.425 ************************************ 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:09:38.425 * Looking for test storage... 
00:09:38.425 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:38.425 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…same toolchain entries repeated…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[…same toolchain entries repeated…]:/var/lib/snapd/snap/bin
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[…same toolchain entries repeated…]:/var/lib/snapd/snap/bin
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[…same toolchain entries repeated…]:/var/lib/snapd/snap/bin
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.685 19:00:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:45.264 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:45.264 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:45.264 Found net devices under 0000:af:00.0: mlx_0_0 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:45.264 Found net devices under 0000:af:00.1: mlx_0_1 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:45.264 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # 
rdma_device_init 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:45.265 
19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:45.265 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:45.265 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:45.265 altname enp175s0f0np0 00:09:45.265 altname ens801f0np0 00:09:45.265 inet 192.168.100.8/24 scope global mlx_0_0 00:09:45.265 valid_lft forever preferred_lft forever 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:45.265 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:45.265 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:45.265 altname enp175s0f1np1 00:09:45.265 altname ens801f1np1 00:09:45.265 inet 192.168.100.9/24 scope global mlx_0_1 00:09:45.265 valid_lft forever preferred_lft forever 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:45.265 192.168.100.9' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:45.265 192.168.100.9' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:45.265 192.168.100.9' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:45.265 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=647439 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 647439 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 647439 ']' 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
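The head/tail juggling above turns the newline-separated RDMA_IP_LIST into the first and second target IPs. Condensed into a sketch (ip_of is an illustrative helper, not common.sh's own name; the ip/awk/cut pipeline is the one in the trace):

```sh
# Extract the IPv4 address of an interface, then split the resulting
# two-line list into first/second target IPs, as common.sh does above.
ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(ip_of mlx_0_0)
$(ip_of mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
```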
00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.266 19:00:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.266 [2024-07-25 19:00:36.885837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:45.266 [2024-07-25 19:00:36.885879] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.266 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.266 [2024-07-25 19:00:36.952274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.266 [2024-07-25 19:00:37.029215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.266 [2024-07-25 19:00:37.029253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.266 [2024-07-25 19:00:37.029260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.266 [2024-07-25 19:00:37.029266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.266 [2024-07-25 19:00:37.029272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.266 [2024-07-25 19:00:37.029381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.266 [2024-07-25 19:00:37.029491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.266 [2024-07-25 19:00:37.029597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.266 [2024-07-25 19:00:37.029599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.266 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.266 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:45.266 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.266 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.266 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.525 [2024-07-25 19:00:37.789410] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc460f0/0xc4a5e0) succeed. 00:09:45.525 [2024-07-25 19:00:37.798810] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc47730/0xc8bc80) succeed. 
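Condensed, the target bring-up recorded here is: nvmfappstart launches nvmf_tgt with tracepoints enabled (-e 0xFFFF) on cores 1-4 (-m 0x1E), waitforlisten blocks until /var/tmp/spdk.sock answers, and host_management.sh@18 registers the RDMA transport, which creates the two mlx5 IB devices. As standalone commands (paths shortened), roughly:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!   # 647439 in this run
# poll until the RPC socket answers (assumption: waitforlisten's actual probe differs)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192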
00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.525 Malloc0 00:09:45.525 [2024-07-25 19:00:37.977639] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.525 19:00:37 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.783 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=647706 00:09:45.783 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 647706 /var/tmp/bdevperf.sock 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 647706 ']' 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
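host_management.sh@22-@30 assembles the target configuration in rpcs.txt and replays it in one rpc_cmd batch, so the individual calls are not echoed. Given what exists afterwards (a Malloc0 namespace on cnode0, host0 whitelisted, an RDMA listener on 192.168.100.8:4420), the batch plausibly amounts to the following; treat sizes and flags as a reconstruction, not a quote of the script:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB / 512 B are assumed values
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420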
00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:45.784 { 00:09:45.784 "params": { 00:09:45.784 "name": "Nvme$subsystem", 00:09:45.784 "trtype": "$TEST_TRANSPORT", 00:09:45.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.784 "adrfam": "ipv4", 00:09:45.784 "trsvcid": "$NVMF_PORT", 00:09:45.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.784 "hdgst": ${hdgst:-false}, 00:09:45.784 "ddgst": ${ddgst:-false} 00:09:45.784 }, 00:09:45.784 "method": "bdev_nvme_attach_controller" 00:09:45.784 } 00:09:45.784 EOF 00:09:45.784 )") 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:45.784 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:45.784 "params": { 00:09:45.784 "name": "Nvme0", 00:09:45.784 "trtype": "rdma", 00:09:45.784 "traddr": "192.168.100.8", 00:09:45.784 "adrfam": "ipv4", 00:09:45.784 "trsvcid": "4420", 00:09:45.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:45.784 "hdgst": false, 00:09:45.784 "ddgst": false 00:09:45.784 }, 00:09:45.784 "method": "bdev_nvme_attach_controller" 00:09:45.784 }' 00:09:45.784 [2024-07-25 19:00:38.070616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:45.784 [2024-07-25 19:00:38.070661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647706 ] 00:09:45.784 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.784 [2024-07-25 19:00:38.139422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.784 [2024-07-25 19:00:38.215545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.042 Running I/O for 10 seconds... 
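The gen_nvmf_target_json trace above shows the host-side configuration being generated on the fly: one heredoc-built JSON object per subsystem index, joined with IFS=',' and pretty-printed by jq. A condensed sketch of that flow, using the exact template fields printed in the trace (the full helper may wrap the objects in a larger config document):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # join the per-subsystem objects and pretty-print; bdevperf consumes the
    # result through --json /dev/fd/63 (process substitution at the call site)
    printf '%s\n' "${config[*]}" | jq .
}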
00:09:46.610 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.610 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:46.610 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:46.610 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1584 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1584 -ge 100 ']' 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
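waitforio, traced above (host_management.sh@45-@64), is a bounded poll: up to ten bdev_get_iostat samples, succeeding once the Nvme0n1 bdev has completed at least 100 reads (1584 by the first sample here). A sketch under the same structure:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i
    for ((i = 10; i != 0; i--)); do
        local read_io_count
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # assumption: the real helper's retry delay is not visible in the trace
    done
    return $ret
}

Once the poll returns, host_management.sh@84-@85 removes and immediately re-adds host0 on cnode0 while bdevperf still has a full queue of 64 I/Os outstanding; the abort burst summarized below is the target dropping the queue pair of the momentarily de-authorized host, which is the behavior this test exercises.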
00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.611 19:00:38 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:09:47.558 [2024-07-25 19:00:39.995363 - 19:00:39.996381] nvme_qpair.c (nvme_io_qpair_print_command/spdk_nvme_print_completion): *NOTICE*: all 64 outstanding I/Os on sqid:1 (WRITE lba:89728-95744 and READ lba:87680-89600, len:128 each) completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:d6162000 sqhd:52b0 p:0 m:0 dnr:0
00:09:47.560 [2024-07-25 19:00:39.997753] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:09:47.560 [2024-07-25 19:00:39.998663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:47.560 task offset: 89728 on job bdev=Nvme0n1 fails 00:09:47.560 00:09:47.560 Latency(us) 00:09:47.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.560 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:47.560 Job: Nvme0n1 ended in about 1.61 seconds with error 00:09:47.560 Verification LBA range: start 0x0 length 0x400 00:09:47.560 Nvme0n1 : 1.61 1063.98 66.50 39.84 0.00 57432.31 2179.78 1021221.84 00:09:47.560 =================================================================================================================== 00:09:47.560 Total : 1063.98 66.50 39.84 0.00 57432.31 2179.78 1021221.84 00:09:47.560 [2024-07-25 19:00:40.000296] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 647706 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:47.560 { 00:09:47.560 "params": { 00:09:47.560 "name": "Nvme$subsystem", 00:09:47.560 "trtype": "$TEST_TRANSPORT", 00:09:47.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.560 "adrfam": "ipv4", 00:09:47.560 "trsvcid": "$NVMF_PORT", 00:09:47.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.560
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.560 "hdgst": ${hdgst:-false}, 00:09:47.560 "ddgst": ${ddgst:-false} 00:09:47.560 }, 00:09:47.560 "method": "bdev_nvme_attach_controller" 00:09:47.560 } 00:09:47.560 EOF 00:09:47.560 )") 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:47.560 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:47.829 19:00:40 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:47.829 "params": { 00:09:47.829 "name": "Nvme0", 00:09:47.829 "trtype": "rdma", 00:09:47.829 "traddr": "192.168.100.8", 00:09:47.829 "adrfam": "ipv4", 00:09:47.829 "trsvcid": "4420", 00:09:47.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:47.829 "hdgst": false, 00:09:47.829 "ddgst": false 00:09:47.829 }, 00:09:47.829 "method": "bdev_nvme_attach_controller" 00:09:47.829 }' 00:09:47.829 [2024-07-25 19:00:40.052408] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:47.829 [2024-07-25 19:00:40.052457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647966 ] 00:09:47.829 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.829 [2024-07-25 19:00:40.123399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.829 [2024-07-25 19:00:40.195964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.092 Running I/O for 1 seconds... 
00:09:49.078 00:09:49.078 Latency(us) 00:09:49.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.078 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:49.078 Verification LBA range: start 0x0 length 0x400 00:09:49.078 Nvme0n1 : 1.02 2944.45 184.03 0.00 0.00 21282.55 911.81 45362.31 00:09:49.078 =================================================================================================================== 00:09:49.078 Total : 2944.45 184.03 0.00 0.00 21282.55 911.81 45362.31 00:09:49.390 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 647706 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:49.390 rmmod nvme_rdma 00:09:49.390 rmmod nvme_fabrics 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 647439 ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 647439 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 647439 ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 647439 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 647439 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 647439' 00:09:49.390 killing process with pid 647439 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 647439 00:09:49.390 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 647439 00:09:49.656 [2024-07-25 19:00:41.978514] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:49.656 19:00:41 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:49.656 00:09:49.656 real 0m11.197s 00:09:49.656 user 0m25.079s 00:09:49.656 sys 0m5.271s 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 ************************************ 00:09:49.656 END TEST nvmf_host_management 00:09:49.656 ************************************ 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 ************************************ 00:09:49.656 START TEST nvmf_lvol 00:09:49.656 ************************************ 00:09:49.656 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:09:49.925 * Looking for test storage... 
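Before the lvol test output resumes below: the teardown just traced (nvmf/common.sh@488-@490 plus autotest_common.sh@950-@974) unloads the initiator-side kernel modules and then kills the target by pid. The killprocess check-and-kill sequence reduces to roughly this sketch:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # still running?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 for nvmf_tgt here
    # (the real helper special-cases a sudo wrapper before signalling)
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    fi
}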
00:09:49.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- #
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.925 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.926 19:00:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.648 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.648 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:09:56.649 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:09:56.649 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:09:56.649 Found net devices under 0000:af:00.0: mlx_0_0 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:09:56.649 Found net devices under 0000:af:00.1: mlx_0_1 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # 
get_rdma_if_list 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:56.649 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.649 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:09:56.649 altname enp175s0f0np0 00:09:56.649 altname ens801f0np0 00:09:56.649 inet 192.168.100.8/24 scope global mlx_0_0 00:09:56.649 valid_lft forever preferred_lft forever 00:09:56.649 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:56.650 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:56.650 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:09:56.650 altname enp175s0f1np1 00:09:56.650 altname ens801f1np1 00:09:56.650 inet 192.168.100.9/24 scope global mlx_0_1 00:09:56.650 valid_lft forever preferred_lft forever 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@105 -- # continue 2 00:09:56.650 19:00:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:56.650 192.168.100.9' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:56.650 192.168.100.9' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:56.650 192.168.100.9' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=651547 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 651547 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 651547 ']' 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.650 [2024-07-25 19:00:48.116221] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:56.650 [2024-07-25 19:00:48.116270] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.650 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.650 [2024-07-25 19:00:48.187079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.650 [2024-07-25 19:00:48.261242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.650 [2024-07-25 19:00:48.261282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.650 [2024-07-25 19:00:48.261289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.650 [2024-07-25 19:00:48.261294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.650 [2024-07-25 19:00:48.261299] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
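nvmfappstart launches build/bin/nvmf_tgt and then blocks in waitforlisten until the new process (pid 651547 here) answers on /var/tmp/spdk.sock. A simplified version of that polling loop (a sketch, assuming rpc_get_methods is an acceptable liveness probe; not the actual autotest helper):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in {1..100}; do
        # Bail out early if the target died during startup
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server is listening
        "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}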
00:09:56.650 [2024-07-25 19:00:48.261415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.650 [2024-07-25 19:00:48.261541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.650 [2024-07-25 19:00:48.261542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.650 19:00:48 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:56.932 [2024-07-25 19:00:49.186893] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1961280/0x1965770) succeed. 00:09:56.932 [2024-07-25 19:00:49.196057] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1962820/0x19a6e10) succeed. 00:09:56.932 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.210 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:57.210 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:57.488 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:57.488 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:57.488 19:00:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:57.780 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2997f1fa-9db6-4155-b9a0-4688b06c3d71 00:09:57.780 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2997f1fa-9db6-4155-b9a0-4688b06c3d71 lvol 20 00:09:58.084 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8ad7b20a-8178-424f-95f3-c5f89c79eb52 00:09:58.084 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:58.084 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ad7b20a-8178-424f-95f3-c5f89c79eb52 00:09:58.351 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:58.629 [2024-07-25 19:00:50.883537] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:58.629 19:00:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:58.905 19:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=652062 00:09:58.905 19:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:58.905 19:00:51 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:58.905 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.893 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8ad7b20a-8178-424f-95f3-c5f89c79eb52 MY_SNAPSHOT 00:09:59.893 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2af39211-ee6f-440d-ac87-3b2ecc7121f8 00:09:59.893 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8ad7b20a-8178-424f-95f3-c5f89c79eb52 30 00:10:00.173 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2af39211-ee6f-440d-ac87-3b2ecc7121f8 MY_CLONE 00:10:00.452 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f4598f16-3436-4f3e-bca8-63208a7da659 00:10:00.452 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f4598f16-3436-4f3e-bca8-63208a7da659 00:10:00.743 19:00:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 652062 00:10:10.809 Initializing NVMe Controllers 00:10:10.809 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:10.809 Controller IO queue size 128, less than required. 00:10:10.809 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:10.809 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:10.809 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:10.809 Initialization complete. Launching workers. 
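Condensing the RPC sequence traced above (transport, malloc base bdevs, raid0, lvstore, lvol, subsystem wiring, snapshot, resize, clone, inflate) into one runnable script looks roughly like this; the UUIDs are captured from rpc.py output rather than hard-coded, since they differ on every run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                                # -> Malloc0
$rpc bdev_malloc_create 64 512                                # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)               # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                              # grow 20 -> 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                               # detach clone from snapshot

The spdk_nvme_perf run started above (pid 652062) drives randwrite I/O at the listener while these lvol operations complete; its results follow.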
00:10:10.809 ========================================================
00:10:10.809 Latency(us)
00:10:10.809 Device Information : IOPS MiB/s Average min max
00:10:10.809 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15943.40 62.28 8031.25 1961.00 50697.26
00:10:10.809 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15993.20 62.47 8005.60 2853.74 47409.54
00:10:10.809 ========================================================
00:10:10.809 Total : 31936.60 124.75 8018.41 1961.00 50697.26
00:10:10.809
00:10:10.809 19:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:10.809 19:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ad7b20a-8178-424f-95f3-c5f89c79eb52
00:10:10.809 19:01:02 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2997f1fa-9db6-4155-b9a0-4688b06c3d71
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:10:10.809 rmmod nvme_rdma
00:10:10.809 rmmod nvme_fabrics
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 651547 ']'
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 651547
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 651547 ']'
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 651547
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651547
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651547' 00:10:10.809 killing process with pid 651547 00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 651547 00:10:10.809 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 651547 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:11.070 00:10:11.070 real 0m21.360s 00:10:11.070 user 1m12.040s 00:10:11.070 sys 0m5.419s 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:11.070 ************************************ 00:10:11.070 END TEST nvmf_lvol 00:10:11.070 ************************************ 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.070 ************************************ 00:10:11.070 START TEST nvmf_lvs_grow 00:10:11.070 ************************************ 00:10:11.070 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:11.331 * Looking for test storage... 
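The guarded shutdown traced above (ps comm check against reactor_0, then kill and wait on pid 651547) exists so the harness never signals a recycled pid or a sudo wrapper. Reduced to a sketch of the pattern (not the exact autotest_common.sh helper):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                  # already gone
    # Only signal if the pid still names the SPDK reactor we started
    [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reaps the child; works because the harness spawned it
}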
00:10:11.331 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.331 19:01:03 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
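Note how each nested source of paths/export.sh prepends the same go/golangci/protoc directories again, which is why the PATH echoed above keeps growing within a run. This is harmless, but a dedup pass would be straightforward if it ever mattered (illustrative sketch, not part of the harness):

dedup_path() {
    local d out=""
    declare -A seen
    IFS=: read -ra parts <<< "$PATH"
    for d in "${parts[@]}"; do
        [[ -n ${seen[$d]:-} ]] && continue   # keep first occurrence only
        seen[$d]=1
        out=${out:+$out:}$d
    done
    printf '%s\n' "$out"
}
PATH=$(dedup_path)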
00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.331 19:01:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:17.905 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.906 19:01:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:17.906 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:17.906 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:17.906 Found net devices under 0000:af:00.0: mlx_0_0 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:17.906 Found net devices under 0000:af:00.1: mlx_0_1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:17.906 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:17.906 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:17.906 altname enp175s0f0np0 00:10:17.906 altname ens801f0np0 00:10:17.906 inet 192.168.100.8/24 scope global mlx_0_0 00:10:17.906 
valid_lft forever preferred_lft forever 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:17.906 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:17.906 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:17.907 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:17.907 altname enp175s0f1np1 00:10:17.907 altname ens801f1np1 00:10:17.907 inet 192.168.100.9/24 scope global mlx_0_1 00:10:17.907 valid_lft forever preferred_lft forever 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:17.907 192.168.100.9' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:17.907 192.168.100.9' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:17.907 192.168.100.9' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:17.907 19:01:09 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=657284 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 657284 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 657284 ']' 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.907 19:01:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:17.907 [2024-07-25 19:01:09.569272] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:17.907 [2024-07-25 19:01:09.569322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.907 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.907 [2024-07-25 19:01:09.639745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.907 [2024-07-25 19:01:09.713654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.907 [2024-07-25 19:01:09.713691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.907 [2024-07-25 19:01:09.713699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.907 [2024-07-25 19:01:09.713705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.907 [2024-07-25 19:01:09.713710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
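The trace around this point is the standard target bring-up: nvmf_tgt is launched on a single core with all tracepoint groups enabled, waitforlisten polls its RPC socket, and the RDMA transport is created with the options derived above. Condensed into a minimal sketch (assuming the SPDK checkout path used in this workspace and the default /var/tmp/spdk.sock socket; the shell variables are illustrative, not the harness's own):

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # checkout path taken from this log

$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &      # core mask 0x1, all tracepoint groups
nvmfpid=$!

# waitforlisten equivalent: poll until the RPC socket answers
until $SPDK_DIR/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# RDMA transport with the NVMF_TRANSPORT_OPTS computed above
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
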
00:10:17.907 [2024-07-25 19:01:09.713746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.166 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:18.426 [2024-07-25 19:01:10.641653] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c3bbb0/0x1c400a0) succeed. 00:10:18.426 [2024-07-25 19:01:10.650888] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c3d0b0/0x1c81740) succeed. 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.426 ************************************ 00:10:18.426 START TEST lvs_grow_clean 00:10:18.426 ************************************ 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:18.426 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:18.685 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:18.685 19:01:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=351a487c-1422-4ddb-803f-a10090e18cd2 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:18.944 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 351a487c-1422-4ddb-803f-a10090e18cd2 lvol 150 00:10:19.204 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 00:10:19.204 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:19.204 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:19.463 [2024-07-25 19:01:11.716877] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:19.463 [2024-07-25 19:01:11.716933] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:19.463 true 00:10:19.463 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:19.463 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:19.463 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:19.463 19:01:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:19.722 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 00:10:19.981 19:01:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:20.240 [2024-07-25 19:01:12.455290] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=657936 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 657936 /var/tmp/bdevperf.sock 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 657936 ']' 00:10:20.240 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:20.241 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.241 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:20.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:20.241 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.241 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:20.241 [2024-07-25 19:01:12.692267] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
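The geometry the clean-grow test asserts follows from the sizes above: a 200 MiB aio file with a 4 MiB cluster size yields 50 clusters, one cluster's worth of which goes to blobstore metadata here, hence total_data_clusters == 49; the 150 MiB lvol rounds up to ceil(150/4) = 38 clusters, i.e. 152 MiB, which is the 38912 4 KiB blocks reported below; once the file grows to 400 MiB and is rescanned, the count becomes 99. The same steps outside the harness, as a sketch (the lvstore UUID is captured from the create call; SPDK_DIR as in the earlier sketch):

AIO=$SPDK_DIR/test/nvmf/target/aio_bdev
truncate -s 200M "$AIO"
$SPDK_DIR/scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096
lvs=$($SPDK_DIR/scripts/rpc.py bdev_lvol_create_lvstore \
        --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$SPDK_DIR/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB -> 38 clusters

truncate -s 400M "$AIO"                                           # grow the backing file
$SPDK_DIR/scripts/rpc.py bdev_aio_rescan aio_bdev                 # 51200 -> 102400 blocks
$SPDK_DIR/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
$SPDK_DIR/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99
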
00:10:20.241 [2024-07-25 19:01:12.692317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657936 ] 00:10:20.500 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.500 [2024-07-25 19:01:12.744278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.500 [2024-07-25 19:01:12.816368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.500 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.500 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:20.500 19:01:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:20.759 Nvme0n1 00:10:20.759 19:01:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:21.019 [ 00:10:21.019 { 00:10:21.019 "name": "Nvme0n1", 00:10:21.019 "aliases": [ 00:10:21.019 "1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7" 00:10:21.019 ], 00:10:21.019 "product_name": "NVMe disk", 00:10:21.019 "block_size": 4096, 00:10:21.019 "num_blocks": 38912, 00:10:21.019 "uuid": "1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7", 00:10:21.019 "assigned_rate_limits": { 00:10:21.019 "rw_ios_per_sec": 0, 00:10:21.019 "rw_mbytes_per_sec": 0, 00:10:21.019 "r_mbytes_per_sec": 0, 00:10:21.019 "w_mbytes_per_sec": 0 00:10:21.019 }, 00:10:21.019 "claimed": false, 00:10:21.019 "zoned": false, 00:10:21.019 "supported_io_types": { 00:10:21.019 "read": true, 00:10:21.019 "write": true, 00:10:21.019 "unmap": true, 00:10:21.019 "flush": true, 00:10:21.019 "reset": true, 00:10:21.019 "nvme_admin": true, 00:10:21.019 "nvme_io": true, 00:10:21.019 "nvme_io_md": false, 00:10:21.019 "write_zeroes": true, 00:10:21.019 "zcopy": false, 00:10:21.019 "get_zone_info": false, 00:10:21.019 "zone_management": false, 00:10:21.019 "zone_append": false, 00:10:21.019 "compare": true, 00:10:21.019 "compare_and_write": true, 00:10:21.019 "abort": true, 00:10:21.019 "seek_hole": false, 00:10:21.019 "seek_data": false, 00:10:21.019 "copy": true, 00:10:21.019 "nvme_iov_md": false 00:10:21.019 }, 00:10:21.019 "memory_domains": [ 00:10:21.019 { 00:10:21.019 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:21.019 "dma_device_type": 0 00:10:21.019 } 00:10:21.019 ], 00:10:21.019 "driver_specific": { 00:10:21.019 "nvme": [ 00:10:21.019 { 00:10:21.019 "trid": { 00:10:21.019 "trtype": "RDMA", 00:10:21.019 "adrfam": "IPv4", 00:10:21.019 "traddr": "192.168.100.8", 00:10:21.019 "trsvcid": "4420", 00:10:21.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:21.019 }, 00:10:21.019 "ctrlr_data": { 00:10:21.019 "cntlid": 1, 00:10:21.019 "vendor_id": "0x8086", 00:10:21.019 "model_number": "SPDK bdev Controller", 00:10:21.019 "serial_number": "SPDK0", 00:10:21.019 "firmware_revision": "24.09", 00:10:21.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:21.019 "oacs": { 00:10:21.019 "security": 0, 00:10:21.019 "format": 0, 00:10:21.019 "firmware": 0, 00:10:21.019 "ns_manage": 0 00:10:21.019 }, 
00:10:21.019 "multi_ctrlr": true, 00:10:21.019 "ana_reporting": false 00:10:21.019 }, 00:10:21.019 "vs": { 00:10:21.019 "nvme_version": "1.3" 00:10:21.019 }, 00:10:21.019 "ns_data": { 00:10:21.019 "id": 1, 00:10:21.019 "can_share": true 00:10:21.019 } 00:10:21.019 } 00:10:21.019 ], 00:10:21.019 "mp_policy": "active_passive" 00:10:21.019 } 00:10:21.019 } 00:10:21.019 ] 00:10:21.019 19:01:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=658032 00:10:21.019 19:01:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:21.019 19:01:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:21.019 Running I/O for 10 seconds... 00:10:22.397 Latency(us) 00:10:22.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.397 Nvme0n1 : 1.00 33728.00 131.75 0.00 0.00 0.00 0.00 0.00 00:10:22.397 =================================================================================================================== 00:10:22.397 Total : 33728.00 131.75 0.00 0.00 0.00 0.00 0.00 00:10:22.397 00:10:22.967 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:23.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.226 Nvme0n1 : 2.00 33985.50 132.76 0.00 0.00 0.00 0.00 0.00 00:10:23.226 =================================================================================================================== 00:10:23.226 Total : 33985.50 132.76 0.00 0.00 0.00 0.00 0.00 00:10:23.226 00:10:23.226 true 00:10:23.226 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:23.226 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:23.485 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:23.485 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:23.485 19:01:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 658032 00:10:24.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.053 Nvme0n1 : 3.00 34069.67 133.08 0.00 0.00 0.00 0.00 0.00 00:10:24.053 =================================================================================================================== 00:10:24.053 Total : 34069.67 133.08 0.00 0.00 0.00 0.00 0.00 00:10:24.053 00:10:25.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.433 Nvme0n1 : 4.00 34176.25 133.50 0.00 0.00 0.00 0.00 0.00 00:10:25.433 =================================================================================================================== 00:10:25.433 Total : 34176.25 133.50 0.00 0.00 0.00 0.00 0.00 00:10:25.433 00:10:26.371 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:26.371 Nvme0n1 : 5.00 34253.00 133.80 0.00 0.00 0.00 0.00 0.00 00:10:26.371 =================================================================================================================== 00:10:26.371 Total : 34253.00 133.80 0.00 0.00 0.00 0.00 0.00 00:10:26.371 00:10:27.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.309 Nvme0n1 : 6.00 34304.33 134.00 0.00 0.00 0.00 0.00 0.00 00:10:27.309 =================================================================================================================== 00:10:27.309 Total : 34304.33 134.00 0.00 0.00 0.00 0.00 0.00 00:10:27.309 00:10:28.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.247 Nvme0n1 : 7.00 34276.71 133.89 0.00 0.00 0.00 0.00 0.00 00:10:28.247 =================================================================================================================== 00:10:28.247 Total : 34276.71 133.89 0.00 0.00 0.00 0.00 0.00 00:10:28.247 00:10:29.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.185 Nvme0n1 : 8.00 34300.12 133.98 0.00 0.00 0.00 0.00 0.00 00:10:29.185 =================================================================================================================== 00:10:29.185 Total : 34300.12 133.98 0.00 0.00 0.00 0.00 0.00 00:10:29.185 00:10:30.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.123 Nvme0n1 : 9.00 34332.44 134.11 0.00 0.00 0.00 0.00 0.00 00:10:30.123 =================================================================================================================== 00:10:30.123 Total : 34332.44 134.11 0.00 0.00 0.00 0.00 0.00 00:10:30.123 00:10:31.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.061 Nvme0n1 : 10.00 34358.50 134.21 0.00 0.00 0.00 0.00 0.00 00:10:31.061 =================================================================================================================== 00:10:31.061 Total : 34358.50 134.21 0.00 0.00 0.00 0.00 0.00 00:10:31.061 00:10:31.061 00:10:31.061 Latency(us) 00:10:31.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.061 Nvme0n1 : 10.00 34358.30 134.21 0.00 0.00 3722.66 2578.70 8206.25 00:10:31.061 =================================================================================================================== 00:10:31.061 Total : 34358.30 134.21 0.00 0.00 3722.66 2578.70 8206.25 00:10:31.061 0 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 657936 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 657936 ']' 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 657936 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.061 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657936 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657936' 00:10:31.321 killing process with pid 657936 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 657936 00:10:31.321 Received shutdown signal, test time was about 10.000000 seconds 00:10:31.321 00:10:31.321 Latency(us) 00:10:31.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.321 =================================================================================================================== 00:10:31.321 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 657936 00:10:31.321 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:31.580 19:01:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:31.839 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:31.839 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:32.099 [2024-07-25 19:01:24.503846] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:32.099 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:32.100 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:32.358 request: 00:10:32.358 { 00:10:32.358 "uuid": "351a487c-1422-4ddb-803f-a10090e18cd2", 00:10:32.358 "method": "bdev_lvol_get_lvstores", 00:10:32.358 "req_id": 1 00:10:32.358 } 00:10:32.358 Got JSON-RPC error response 00:10:32.358 response: 00:10:32.358 { 00:10:32.358 "code": -19, 00:10:32.358 "message": "No such device" 00:10:32.358 } 00:10:32.358 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:32.358 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:32.358 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:32.358 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:32.358 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:32.617 aio_bdev 00:10:32.617 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 00:10:32.617 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 00:10:32.617 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.617 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:32.618 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.618 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.618 19:01:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:32.877 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 -t 2000 00:10:32.877 [ 00:10:32.877 { 00:10:32.877 "name": "1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7", 00:10:32.877 "aliases": [ 00:10:32.877 "lvs/lvol" 00:10:32.877 ], 00:10:32.877 "product_name": "Logical Volume", 00:10:32.877 "block_size": 4096, 00:10:32.877 "num_blocks": 38912, 00:10:32.877 "uuid": "1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7", 00:10:32.877 "assigned_rate_limits": { 00:10:32.877 "rw_ios_per_sec": 0, 00:10:32.877 "rw_mbytes_per_sec": 0, 00:10:32.877 "r_mbytes_per_sec": 0, 00:10:32.877 "w_mbytes_per_sec": 0 00:10:32.877 }, 00:10:32.877 "claimed": false, 00:10:32.877 "zoned": false, 00:10:32.877 "supported_io_types": { 00:10:32.877 "read": true, 00:10:32.877 "write": true, 00:10:32.877 "unmap": true, 00:10:32.877 "flush": false, 00:10:32.877 "reset": true, 00:10:32.877 "nvme_admin": false, 00:10:32.877 "nvme_io": false, 00:10:32.877 "nvme_io_md": false, 00:10:32.877 "write_zeroes": true, 00:10:32.877 "zcopy": false, 00:10:32.877 "get_zone_info": false, 00:10:32.877 "zone_management": false, 00:10:32.877 "zone_append": false, 00:10:32.877 "compare": false, 00:10:32.877 "compare_and_write": false, 00:10:32.877 "abort": false, 00:10:32.877 "seek_hole": true, 00:10:32.877 "seek_data": true, 00:10:32.877 "copy": false, 00:10:32.877 "nvme_iov_md": false 00:10:32.877 }, 00:10:32.877 "driver_specific": { 00:10:32.877 "lvol": { 00:10:32.877 "lvol_store_uuid": "351a487c-1422-4ddb-803f-a10090e18cd2", 00:10:32.877 "base_bdev": "aio_bdev", 00:10:32.877 "thin_provision": false, 00:10:32.877 "num_allocated_clusters": 38, 00:10:32.877 "snapshot": false, 00:10:32.877 "clone": false, 00:10:32.877 "esnap_clone": false 00:10:32.877 } 00:10:32.877 } 00:10:32.877 } 00:10:32.877 ] 00:10:32.877 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:32.877 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:32.877 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:33.136 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:33.136 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:33.136 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:33.396 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:33.396 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1eb5ed3b-dd51-4223-a3a8-9ea0f8ab77b7 00:10:33.396 19:01:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 351a487c-1422-4ddb-803f-a10090e18cd2 00:10:33.655 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.914 00:10:33.914 real 0m15.509s 00:10:33.914 user 0m15.581s 00:10:33.914 sys 0m0.978s 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 ************************************ 00:10:33.914 END TEST lvs_grow_clean 00:10:33.914 ************************************ 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:33.914 ************************************ 00:10:33.914 START TEST lvs_grow_dirty 00:10:33.914 ************************************ 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.914 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.915 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:34.174 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:34.174 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:34.433 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:34.433 19:01:26 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:34.433 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:34.692 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:34.692 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:34.692 19:01:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 lvol 150 00:10:34.692 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:34.692 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:34.692 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:34.952 [2024-07-25 19:01:27.326582] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:34.952 [2024-07-25 19:01:27.326638] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:34.952 true 00:10:34.952 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:34.952 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:35.211 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:35.211 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:35.470 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:35.470 19:01:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:35.730 [2024-07-25 19:01:28.081058] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:35.730 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:35.989 19:01:28 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=660647 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 660647 /var/tmp/bdevperf.sock 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 660647 ']' 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:35.989 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.990 19:01:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 [2024-07-25 19:01:28.310923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
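Because bdevperf is started with -z it comes up idle and is driven entirely over its own RPC socket: the NVMe-oF namespace exported above is attached as bdev Nvme0, then perform_tests launches the 10-second randwrite run whose per-second table follows. Condensed (paths, socket and options as in the trace; a sketch of the sequence, not the harness itself):

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &    # -z: wait for RPC instead of running
bdevperf_pid=$!

$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
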
00:10:35.990 [2024-07-25 19:01:28.310976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660647 ] 00:10:35.990 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.990 [2024-07-25 19:01:28.381235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.990 [2024-07-25 19:01:28.459721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.928 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.928 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:36.928 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:37.187 Nvme0n1 00:10:37.187 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:37.187 [ 00:10:37.187 { 00:10:37.187 "name": "Nvme0n1", 00:10:37.187 "aliases": [ 00:10:37.187 "f24b589f-ea53-4071-8346-5fd4342c1a94" 00:10:37.187 ], 00:10:37.187 "product_name": "NVMe disk", 00:10:37.187 "block_size": 4096, 00:10:37.187 "num_blocks": 38912, 00:10:37.187 "uuid": "f24b589f-ea53-4071-8346-5fd4342c1a94", 00:10:37.187 "assigned_rate_limits": { 00:10:37.187 "rw_ios_per_sec": 0, 00:10:37.187 "rw_mbytes_per_sec": 0, 00:10:37.187 "r_mbytes_per_sec": 0, 00:10:37.187 "w_mbytes_per_sec": 0 00:10:37.187 }, 00:10:37.187 "claimed": false, 00:10:37.187 "zoned": false, 00:10:37.187 "supported_io_types": { 00:10:37.187 "read": true, 00:10:37.187 "write": true, 00:10:37.187 "unmap": true, 00:10:37.187 "flush": true, 00:10:37.187 "reset": true, 00:10:37.187 "nvme_admin": true, 00:10:37.187 "nvme_io": true, 00:10:37.187 "nvme_io_md": false, 00:10:37.187 "write_zeroes": true, 00:10:37.187 "zcopy": false, 00:10:37.187 "get_zone_info": false, 00:10:37.187 "zone_management": false, 00:10:37.187 "zone_append": false, 00:10:37.187 "compare": true, 00:10:37.187 "compare_and_write": true, 00:10:37.187 "abort": true, 00:10:37.187 "seek_hole": false, 00:10:37.187 "seek_data": false, 00:10:37.187 "copy": true, 00:10:37.187 "nvme_iov_md": false 00:10:37.187 }, 00:10:37.187 "memory_domains": [ 00:10:37.187 { 00:10:37.187 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:37.187 "dma_device_type": 0 00:10:37.187 } 00:10:37.187 ], 00:10:37.187 "driver_specific": { 00:10:37.187 "nvme": [ 00:10:37.187 { 00:10:37.187 "trid": { 00:10:37.187 "trtype": "RDMA", 00:10:37.188 "adrfam": "IPv4", 00:10:37.188 "traddr": "192.168.100.8", 00:10:37.188 "trsvcid": "4420", 00:10:37.188 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:37.188 }, 00:10:37.188 "ctrlr_data": { 00:10:37.188 "cntlid": 1, 00:10:37.188 "vendor_id": "0x8086", 00:10:37.188 "model_number": "SPDK bdev Controller", 00:10:37.188 "serial_number": "SPDK0", 00:10:37.188 "firmware_revision": "24.09", 00:10:37.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:37.188 "oacs": { 00:10:37.188 "security": 0, 00:10:37.188 "format": 0, 00:10:37.188 "firmware": 0, 00:10:37.188 "ns_manage": 0 00:10:37.188 }, 
00:10:37.188 "multi_ctrlr": true, 00:10:37.188 "ana_reporting": false 00:10:37.188 }, 00:10:37.188 "vs": { 00:10:37.188 "nvme_version": "1.3" 00:10:37.188 }, 00:10:37.188 "ns_data": { 00:10:37.188 "id": 1, 00:10:37.188 "can_share": true 00:10:37.188 } 00:10:37.188 } 00:10:37.188 ], 00:10:37.188 "mp_policy": "active_passive" 00:10:37.188 } 00:10:37.188 } 00:10:37.188 ] 00:10:37.188 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=660890 00:10:37.188 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:37.188 19:01:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:37.447 Running I/O for 10 seconds... 00:10:38.383 Latency(us) 00:10:38.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.384 Nvme0n1 : 1.00 33251.00 129.89 0.00 0.00 0.00 0.00 0.00 00:10:38.384 =================================================================================================================== 00:10:38.384 Total : 33251.00 129.89 0.00 0.00 0.00 0.00 0.00 00:10:38.384 00:10:39.319 19:01:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:39.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.319 Nvme0n1 : 2.00 33696.00 131.62 0.00 0.00 0.00 0.00 0.00 00:10:39.319 =================================================================================================================== 00:10:39.319 Total : 33696.00 131.62 0.00 0.00 0.00 0.00 0.00 00:10:39.319 00:10:39.577 true 00:10:39.577 19:01:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:39.577 19:01:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:39.577 19:01:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:39.577 19:01:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:39.577 19:01:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 660890 00:10:40.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.514 Nvme0n1 : 3.00 33856.67 132.25 0.00 0.00 0.00 0.00 0.00 00:10:40.514 =================================================================================================================== 00:10:40.514 Total : 33856.67 132.25 0.00 0.00 0.00 0.00 0.00 00:10:40.514 00:10:41.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.450 Nvme0n1 : 4.00 34024.75 132.91 0.00 0.00 0.00 0.00 0.00 00:10:41.450 =================================================================================================================== 00:10:41.450 Total : 34024.75 132.91 0.00 0.00 0.00 0.00 0.00 00:10:41.450 00:10:42.387 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:10:42.387 Nvme0n1 : 5.00 34126.40 133.31 0.00 0.00 0.00 0.00 0.00 00:10:42.387 =================================================================================================================== 00:10:42.387 Total : 34126.40 133.31 0.00 0.00 0.00 0.00 0.00 00:10:42.387 00:10:43.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.324 Nvme0n1 : 6.00 34201.67 133.60 0.00 0.00 0.00 0.00 0.00 00:10:43.324 =================================================================================================================== 00:10:43.324 Total : 34201.67 133.60 0.00 0.00 0.00 0.00 0.00 00:10:43.324 00:10:44.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.261 Nvme0n1 : 7.00 34252.71 133.80 0.00 0.00 0.00 0.00 0.00 00:10:44.261 =================================================================================================================== 00:10:44.261 Total : 34252.71 133.80 0.00 0.00 0.00 0.00 0.00 00:10:44.261 00:10:45.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.638 Nvme0n1 : 8.00 34271.38 133.87 0.00 0.00 0.00 0.00 0.00 00:10:45.638 =================================================================================================================== 00:10:45.638 Total : 34271.38 133.87 0.00 0.00 0.00 0.00 0.00 00:10:45.638 00:10:46.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.576 Nvme0n1 : 9.00 34312.00 134.03 0.00 0.00 0.00 0.00 0.00 00:10:46.576 =================================================================================================================== 00:10:46.576 Total : 34312.00 134.03 0.00 0.00 0.00 0.00 0.00 00:10:46.576 00:10:47.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.514 Nvme0n1 : 10.00 34336.60 134.13 0.00 0.00 0.00 0.00 0.00 00:10:47.514 =================================================================================================================== 00:10:47.514 Total : 34336.60 134.13 0.00 0.00 0.00 0.00 0.00 00:10:47.514 00:10:47.514 00:10:47.514 Latency(us) 00:10:47.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.514 Nvme0n1 : 10.00 34336.42 134.13 0.00 0.00 3724.93 2820.90 9630.94 00:10:47.514 =================================================================================================================== 00:10:47.514 Total : 34336.42 134.13 0.00 0.00 3724.93 2820.90 9630.94 00:10:47.514 0 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 660647 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 660647 ']' 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 660647 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 660647 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 660647' 00:10:47.514 killing process with pid 660647 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 660647 00:10:47.514 Received shutdown signal, test time was about 10.000000 seconds 00:10:47.514 00:10:47.514 Latency(us) 00:10:47.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.514 =================================================================================================================== 00:10:47.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:47.514 19:01:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 660647 00:10:47.774 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:47.774 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:48.033 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:48.033 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 657284 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 657284 00:10:48.292 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 657284 Killed "${NVMF_APP[@]}" "$@" 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=662763 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 662763 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 662763 ']' 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.292 19:01:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.292 [2024-07-25 19:01:40.721409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:48.292 [2024-07-25 19:01:40.721459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.292 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.551 [2024-07-25 19:01:40.791261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.551 [2024-07-25 19:01:40.868391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.551 [2024-07-25 19:01:40.868425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.551 [2024-07-25 19:01:40.868433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.551 [2024-07-25 19:01:40.868439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.551 [2024-07-25 19:01:40.868444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
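The dirty variant departs from the clean one exactly here: the first target was removed with kill -9 above, so the lvstore was never cleanly closed. When the freshly started target re-creates the aio bdev, the blobstore load path detects the unclean shutdown and recovers (the "Performing recovery on blobstore" notice below, replaying blobs 0x0 and 0x1), after which the test verifies that the grown geometry survived: 99 total data clusters, of which 99 - 38 allocated by the lvol = 61 are free. As a sketch, reusing the variables from the earlier sketches:

kill -9 "$nvmfpid"                                    # crash: no clean lvstore shutdown
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # fresh target
nvmfpid=$!
# ...wait for the RPC socket as before...

# re-attaching the backing file triggers blobstore recovery on load
$SPDK_DIR/scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096
$SPDK_DIR/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # expect 61
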
00:10:48.551 [2024-07-25 19:01:40.868464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:49.118 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:49.378 [2024-07-25 19:01:41.761571] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:49.378 [2024-07-25 19:01:41.761653] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:49.378 [2024-07-25 19:01:41.761678] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.378 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:49.637 19:01:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f24b589f-ea53-4071-8346-5fd4342c1a94 -t 2000 00:10:49.896 [ 00:10:49.896 { 00:10:49.896 "name": "f24b589f-ea53-4071-8346-5fd4342c1a94", 00:10:49.896 "aliases": [ 00:10:49.896 "lvs/lvol" 00:10:49.896 ], 00:10:49.896 "product_name": "Logical Volume", 00:10:49.896 "block_size": 4096, 00:10:49.896 "num_blocks": 38912, 00:10:49.896 "uuid": "f24b589f-ea53-4071-8346-5fd4342c1a94", 00:10:49.896 "assigned_rate_limits": { 00:10:49.896 "rw_ios_per_sec": 0, 00:10:49.896 "rw_mbytes_per_sec": 0, 00:10:49.896 "r_mbytes_per_sec": 0, 00:10:49.896 "w_mbytes_per_sec": 0 00:10:49.896 }, 00:10:49.896 "claimed": false, 00:10:49.896 "zoned": false, 
00:10:49.896 "supported_io_types": { 00:10:49.896 "read": true, 00:10:49.896 "write": true, 00:10:49.896 "unmap": true, 00:10:49.896 "flush": false, 00:10:49.896 "reset": true, 00:10:49.896 "nvme_admin": false, 00:10:49.896 "nvme_io": false, 00:10:49.896 "nvme_io_md": false, 00:10:49.896 "write_zeroes": true, 00:10:49.896 "zcopy": false, 00:10:49.896 "get_zone_info": false, 00:10:49.896 "zone_management": false, 00:10:49.896 "zone_append": false, 00:10:49.896 "compare": false, 00:10:49.896 "compare_and_write": false, 00:10:49.896 "abort": false, 00:10:49.896 "seek_hole": true, 00:10:49.896 "seek_data": true, 00:10:49.896 "copy": false, 00:10:49.896 "nvme_iov_md": false 00:10:49.896 }, 00:10:49.896 "driver_specific": { 00:10:49.896 "lvol": { 00:10:49.896 "lvol_store_uuid": "b2404d8d-8dd2-453f-b25c-85ae34d53f68", 00:10:49.896 "base_bdev": "aio_bdev", 00:10:49.896 "thin_provision": false, 00:10:49.896 "num_allocated_clusters": 38, 00:10:49.896 "snapshot": false, 00:10:49.896 "clone": false, 00:10:49.896 "esnap_clone": false 00:10:49.896 } 00:10:49.896 } 00:10:49.896 } 00:10:49.896 ] 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:49.896 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:50.157 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:50.157 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:50.416 [2024-07-25 19:01:42.718296] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:50.416 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.417 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:50.417 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:50.417 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:50.676 request: 00:10:50.676 { 00:10:50.676 "uuid": "b2404d8d-8dd2-453f-b25c-85ae34d53f68", 00:10:50.676 "method": "bdev_lvol_get_lvstores", 00:10:50.676 "req_id": 1 00:10:50.676 } 00:10:50.676 Got JSON-RPC error response 00:10:50.676 response: 00:10:50.676 { 00:10:50.676 "code": -19, 00:10:50.676 "message": "No such device" 00:10:50.676 } 00:10:50.676 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:50.676 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:50.676 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:50.676 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:50.676 19:01:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:50.676 aio_bdev 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.676 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:50.936 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f24b589f-ea53-4071-8346-5fd4342c1a94 -t 2000 00:10:51.194 [ 00:10:51.194 { 00:10:51.194 "name": "f24b589f-ea53-4071-8346-5fd4342c1a94", 00:10:51.194 "aliases": [ 00:10:51.194 "lvs/lvol" 00:10:51.194 ], 00:10:51.194 "product_name": "Logical Volume", 00:10:51.194 "block_size": 4096, 00:10:51.194 "num_blocks": 38912, 00:10:51.194 "uuid": "f24b589f-ea53-4071-8346-5fd4342c1a94", 00:10:51.194 "assigned_rate_limits": { 00:10:51.194 "rw_ios_per_sec": 0, 00:10:51.194 "rw_mbytes_per_sec": 0, 00:10:51.194 "r_mbytes_per_sec": 0, 00:10:51.194 "w_mbytes_per_sec": 0 00:10:51.194 }, 00:10:51.194 "claimed": false, 00:10:51.194 "zoned": false, 00:10:51.195 "supported_io_types": { 00:10:51.195 "read": true, 00:10:51.195 "write": true, 00:10:51.195 "unmap": true, 00:10:51.195 "flush": false, 00:10:51.195 "reset": true, 00:10:51.195 "nvme_admin": false, 00:10:51.195 "nvme_io": false, 00:10:51.195 "nvme_io_md": false, 00:10:51.195 "write_zeroes": true, 00:10:51.195 "zcopy": false, 00:10:51.195 "get_zone_info": false, 00:10:51.195 "zone_management": false, 00:10:51.195 "zone_append": false, 00:10:51.195 "compare": false, 00:10:51.195 "compare_and_write": false, 00:10:51.195 "abort": false, 00:10:51.195 "seek_hole": true, 00:10:51.195 "seek_data": true, 00:10:51.195 "copy": false, 00:10:51.195 "nvme_iov_md": false 00:10:51.195 }, 00:10:51.195 "driver_specific": { 00:10:51.195 "lvol": { 00:10:51.195 "lvol_store_uuid": "b2404d8d-8dd2-453f-b25c-85ae34d53f68", 00:10:51.195 "base_bdev": "aio_bdev", 00:10:51.195 "thin_provision": false, 00:10:51.195 "num_allocated_clusters": 38, 00:10:51.195 "snapshot": false, 00:10:51.195 "clone": false, 00:10:51.195 "esnap_clone": false 00:10:51.195 } 00:10:51.195 } 00:10:51.195 } 00:10:51.195 ] 00:10:51.195 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:51.195 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:51.195 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:51.453 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:51.453 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:51.453 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:51.453 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:51.453 19:01:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f24b589f-ea53-4071-8346-5fd4342c1a94 00:10:51.712 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2404d8d-8dd2-453f-b25c-85ae34d53f68 00:10:51.971 19:01:44 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:51.971 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:52.230 00:10:52.230 real 0m18.108s 00:10:52.230 user 0m46.813s 00:10:52.230 sys 0m2.830s 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:52.230 ************************************ 00:10:52.230 END TEST lvs_grow_dirty 00:10:52.230 ************************************ 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:52.230 nvmf_trace.0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:52.230 rmmod nvme_rdma 00:10:52.230 rmmod nvme_fabrics 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 662763 ']' 00:10:52.230 19:01:44 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 662763 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 662763 ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 662763 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 662763 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 662763' 00:10:52.230 killing process with pid 662763 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 662763 00:10:52.230 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 662763 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:52.489 00:10:52.489 real 0m41.307s 00:10:52.489 user 1m8.709s 00:10:52.489 sys 0m8.655s 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.489 ************************************ 00:10:52.489 END TEST nvmf_lvs_grow 00:10:52.489 ************************************ 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.489 ************************************ 00:10:52.489 START TEST nvmf_bdev_io_wait 00:10:52.489 ************************************ 00:10:52.489 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:10:52.489 * Looking for test storage... 
00:10:52.749 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.749 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
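[annotation] The block above is nvmf/common.sh being sourced at the top of bdev_io_wait.sh. The long, repetitive PATH values are expected: paths/export.sh prepends the Go/protoc/golangci directories each time a nested script sources it, so entries accumulate across the suite. The same pass also assembles the target's base argument array; a condensed sketch using only flags visible in this trace ($rootdir is shorthand for the workspace spdk directory, and the launch line is illustrative):

    # from nvmf/common.sh (condensed): base arguments for the NVMe-oF target
    NVMF_APP=("$rootdir/build/bin/nvmf_tgt")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless no-hugepage mode
    # per-test launch, e.g.: "${NVMF_APP[@]}" -m 0xF --wait-for-rpc &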
00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.750 19:01:44 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.750 19:01:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.750 19:01:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.750 19:01:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.750 19:01:45 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:10:59.327 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:10:59.327 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:10:59.327 
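[annotation] gather_supported_nvmf_pci_devs buckets NICs into the e810/x722/mlx arrays by PCI vendor:device ID; both ports here (0000:af:00.0 and 0000:af:00.1) match the Mellanox 0x15b3:0x1017 entry, so the mlx5_core driver checks and the 'nvme connect -i 15' adjustment apply. A rough out-of-band equivalent of this scan, assuming lspci is installed on the host:

    lspci -nn -d 15b3:1017                 # list ports with the device id matched above
    pci=0000:af:00.0
    ls "/sys/bus/pci/devices/$pci/net"     # -> mlx_0_0 in this run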
19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:10:59.327 Found net devices under 0000:af:00.0: mlx_0_0 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:10:59.327 Found net devices under 0000:af:00.1: mlx_0_1 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:59.327 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:59.328 19:01:50 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:59.328 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:59.328 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:10:59.328 altname enp175s0f0np0 00:10:59.328 altname ens801f0np0 00:10:59.328 inet 192.168.100.8/24 scope global mlx_0_0 00:10:59.328 valid_lft forever preferred_lft forever 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:59.328 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:59.328 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:10:59.328 altname enp175s0f1np1 00:10:59.328 altname ens801f1np1 00:10:59.328 inet 192.168.100.9/24 scope global mlx_0_1 00:10:59.328 valid_lft forever preferred_lft forever 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:59.328 192.168.100.9' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:59.328 192.168.100.9' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:59.328 192.168.100.9' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=666623 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 666623 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 666623 ']' 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.328 19:01:50 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.328 [2024-07-25 19:01:50.926977] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:59.328 [2024-07-25 19:01:50.927031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.328 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.328 [2024-07-25 19:01:50.998249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.328 [2024-07-25 19:01:51.078115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.328 [2024-07-25 19:01:51.078152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.328 [2024-07-25 19:01:51.078159] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.328 [2024-07-25 19:01:51.078165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.328 [2024-07-25 19:01:51.078170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.328 [2024-07-25 19:01:51.078226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.328 [2024-07-25 19:01:51.078330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.328 [2024-07-25 19:01:51.078451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.328 [2024-07-25 19:01:51.078452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.328 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.328 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:59.329 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.329 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.329 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:59.590 19:01:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.590 [2024-07-25 19:01:51.919434] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1609e40/0x160e330) succeed. 00:10:59.590 [2024-07-25 19:01:51.928394] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x160b480/0x164f9d0) succeed. 00:10:59.590 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.590 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:59.590 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.590 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 Malloc0 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.850 [2024-07-25 19:01:52.102011] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=666877 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=666879 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.850 { 00:10:59.850 "params": { 00:10:59.850 "name": "Nvme$subsystem", 00:10:59.850 "trtype": "$TEST_TRANSPORT", 00:10:59.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.850 "adrfam": "ipv4", 00:10:59.850 "trsvcid": "$NVMF_PORT", 00:10:59.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.850 "hdgst": ${hdgst:-false}, 00:10:59.850 "ddgst": ${ddgst:-false} 00:10:59.850 }, 00:10:59.850 "method": "bdev_nvme_attach_controller" 00:10:59.850 } 00:10:59.850 EOF 00:10:59.850 )") 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=666881 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.850 { 00:10:59.850 "params": { 00:10:59.850 "name": "Nvme$subsystem", 00:10:59.850 "trtype": "$TEST_TRANSPORT", 00:10:59.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.850 "adrfam": "ipv4", 00:10:59.850 "trsvcid": "$NVMF_PORT", 00:10:59.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.850 "hdgst": ${hdgst:-false}, 00:10:59.850 "ddgst": ${ddgst:-false} 00:10:59.850 }, 00:10:59.850 "method": "bdev_nvme_attach_controller" 00:10:59.850 } 00:10:59.850 EOF 00:10:59.850 )") 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:59.850 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=666884 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json 
/dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.851 { 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme$subsystem", 00:10:59.851 "trtype": "$TEST_TRANSPORT", 00:10:59.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "$NVMF_PORT", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.851 "hdgst": ${hdgst:-false}, 00:10:59.851 "ddgst": ${ddgst:-false} 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 } 00:10:59.851 EOF 00:10:59.851 )") 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.851 { 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme$subsystem", 00:10:59.851 "trtype": "$TEST_TRANSPORT", 00:10:59.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "$NVMF_PORT", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.851 "hdgst": ${hdgst:-false}, 00:10:59.851 "ddgst": ${ddgst:-false} 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 } 00:10:59.851 EOF 00:10:59.851 )") 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 666877 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme1", 00:10:59.851 "trtype": "rdma", 00:10:59.851 "traddr": "192.168.100.8", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "4420", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.851 "hdgst": false, 00:10:59.851 "ddgst": false 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 }' 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
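
For readers following the trace: the bdev_io_wait setup above reduces to a short RPC sequence plus four concurrent bdevperf jobs. A minimal sketch, assuming rpc.py (scripts/rpc.py against the default /var/tmp/spdk.sock) stands in for the test's rpc_cmd wrapper; every value below is one this run logged:

  # Shrink the bdev I/O pool (-p 5 -c 1) so submissions hit the io_wait path,
  # then expose a 64 MiB malloc bdev as a namespace of cnode1 on the RDMA listener.
  rpc.py bdev_set_options -p 5 -c 1
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # One bdevperf per workload (write/read/flush/unmap); the gen_nvmf_target_json
  # heredocs above render the attach config fed in via process substitution (/dev/fd/63):
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
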
00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme1", 00:10:59.851 "trtype": "rdma", 00:10:59.851 "traddr": "192.168.100.8", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "4420", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.851 "hdgst": false, 00:10:59.851 "ddgst": false 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 }'
00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme1", 00:10:59.851 "trtype": "rdma", 00:10:59.851 "traddr": "192.168.100.8", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "4420", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.851 "hdgst": false, 00:10:59.851 "ddgst": false 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 }'
00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:59.851 19:01:52 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.851 "params": { 00:10:59.851 "name": "Nvme1", 00:10:59.851 "trtype": "rdma", 00:10:59.851 "traddr": "192.168.100.8", 00:10:59.851 "adrfam": "ipv4", 00:10:59.851 "trsvcid": "4420", 00:10:59.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.851 "hdgst": false, 00:10:59.851 "ddgst": false 00:10:59.851 }, 00:10:59.851 "method": "bdev_nvme_attach_controller" 00:10:59.851 }'
00:10:59.851 [2024-07-25 19:01:52.150406] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... [2024-07-25 19:01:52.150407] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
[2024-07-25 19:01:52.150456] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:59.851
[2024-07-25 19:01:52.150456] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:59.851
[2024-07-25 19:01:52.150791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... [2024-07-25 19:01:52.150826] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:59.851
[2024-07-25 19:01:52.155993] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:10:59.851 [2024-07-25 19:01:52.156038] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:59.851 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.851 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.111 [2024-07-25 19:01:52.337538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.111 [2024-07-25 19:01:52.412955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.111 [2024-07-25 19:01:52.438060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.111 [2024-07-25 19:01:52.513758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.111 [2024-07-25 19:01:52.529827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.370 [2024-07-25 19:01:52.590935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.370 [2024-07-25 19:01:52.620923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:00.370 [2024-07-25 19:01:52.666760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.370 Running I/O for 1 seconds... 00:11:00.370 Running I/O for 1 seconds... 00:11:00.370 Running I/O for 1 seconds... 00:11:00.370 Running I/O for 1 seconds... 00:11:01.305 00:11:01.305 Latency(us) 00:11:01.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.305 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:01.305 Nvme1n1 : 1.01 16861.99 65.87 0.00 0.00 7566.87 4445.05 14474.91 00:11:01.305 =================================================================================================================== 00:11:01.305 Total : 16861.99 65.87 0.00 0.00 7566.87 4445.05 14474.91 00:11:01.305 00:11:01.305 Latency(us) 00:11:01.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.305 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:01.305 Nvme1n1 : 1.01 13809.36 53.94 0.00 0.00 9238.26 5698.78 21085.50 00:11:01.305 =================================================================================================================== 00:11:01.305 Total : 13809.36 53.94 0.00 0.00 9238.26 5698.78 21085.50 00:11:01.305 00:11:01.305 Latency(us) 00:11:01.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.305 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:01.305 Nvme1n1 : 1.00 17562.88 68.61 0.00 0.00 7271.99 3490.50 17552.25 00:11:01.305 =================================================================================================================== 00:11:01.305 Total : 17562.88 68.61 0.00 0.00 7271.99 3490.50 17552.25 00:11:01.564 00:11:01.564 Latency(us) 00:11:01.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.564 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:01.564 Nvme1n1 : 1.00 245751.37 959.97 0.00 0.00 518.36 211.03 1951.83 00:11:01.564 =================================================================================================================== 00:11:01.564 Total : 245751.37 959.97 0.00 0.00 518.36 211.03 1951.83 00:11:01.564 19:01:53 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 666879 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 666881 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 666884 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:01.824 rmmod nvme_rdma 00:11:01.824 rmmod nvme_fabrics 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 666623 ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 666623 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 666623 ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 666623 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 666623 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 666623' 00:11:01.824 killing process with pid 666623 00:11:01.824 19:01:54 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 666623 00:11:01.824 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 666623 00:11:02.083 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:02.084 00:11:02.084 real 0m9.630s 00:11:02.084 user 0m20.974s 00:11:02.084 sys 0m5.793s 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:02.084 ************************************ 00:11:02.084 END TEST nvmf_bdev_io_wait 00:11:02.084 ************************************ 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.084 19:01:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.347 ************************************ 00:11:02.347 START TEST nvmf_queue_depth 00:11:02.347 ************************************ 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:02.347 * Looking for test storage... 
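
Before the queue_depth test gets going, note how nvmf_bdev_io_wait cleaned up above: the subsystem was deleted, the host-side fabrics modules were unloaded (the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -v output), and the target app was killed. A sketch, with rpc.py again standing in for rpc_cmd:

  # Teardown sequence as traced above; 666623 was this run's nvmf_tgt pid.
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill 666623 && wait 666623 2>/dev/null || true
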
00:11:02.347 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.347 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.348 
19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.348 19:01:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
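
The array setup just above and the "Found 0000:af:00.x" matches below classify RDMA-capable NICs by PCI vendor:device ID (0x15b3:0x1017 is a Mellanox ConnectX-5). A rough equivalent of that discovery, sketched here with lspci rather than the script's cached pci_bus map, so treat it as illustrative only:

  # Hypothetical stand-in for gather_supported_nvmf_pci_devs: list ConnectX-5
  # functions and record their netdev names (mlx_0_0 / mlx_0_1 in this run).
  net_devs=()
  for pci in $(lspci -Dn -d 15b3:1017 | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      net_devs+=("${dev##*/}")
    done
  done
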
00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:09.054 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 
0x1017)' 00:11:09.054 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:09.054 Found net devices under 0000:af:00.0: mlx_0_0 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:09.054 Found net devices under 0000:af:00.1: mlx_0_1 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:11:09.054 19:02:00 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:09.054 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:09.055 19:02:00 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:09.055 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:09.055 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:09.055 altname enp175s0f0np0 00:11:09.055 altname ens801f0np0 00:11:09.055 inet 192.168.100.8/24 scope global mlx_0_0 00:11:09.055 valid_lft forever preferred_lft forever 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:09.055 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:09.055 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:09.055 altname enp175s0f1np1 00:11:09.055 altname ens801f1np1 00:11:09.055 inet 192.168.100.9/24 scope global mlx_0_1 00:11:09.055 valid_lft forever preferred_lft forever 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:09.055 192.168.100.9' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:09.055 192.168.100.9' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:09.055 19:02:00 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:09.055 192.168.100.9' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=670473 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 670473 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 670473 ']' 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.055 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.056 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.056 19:02:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 [2024-07-25 19:02:00.609506] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:09.056 [2024-07-25 19:02:00.609554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.056 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.056 [2024-07-25 19:02:00.678024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.056 [2024-07-25 19:02:00.750817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
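
Backing up a step: the NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP values used above were split out of RDMA_IP_LIST with head/tail, one address per discovered mlx interface. A recap with this run's values:

  # Sketch of the derivation traced above.
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
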
00:11:09.056 [2024-07-25 19:02:00.750858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.056 [2024-07-25 19:02:00.750865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.056 [2024-07-25 19:02:00.750872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.056 [2024-07-25 19:02:00.750877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.056 [2024-07-25 19:02:00.750922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.056 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 [2024-07-25 19:02:01.505519] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1913eb0/0x19183a0) succeed. 00:11:09.056 [2024-07-25 19:02:01.514770] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19153b0/0x1959a40) succeed. 
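
The queue_depth test drives the same target flow: nvmfappstart launched nvmf_tgt pinned to core 1 (mask 0x2), waitforlisten blocked until the RPC socket answered, and the RDMA transport plus both mlx5 IB devices came up above. A hedged sketch of those steps, with rpc.py standing in for rpc_cmd:

  # Start the target app (-m 0x2 => core 1), wait for /var/tmp/spdk.sock,
  # then create the RDMA transport exactly as traced above.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # autotest_common.sh helper
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
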
00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 Malloc0 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 [2024-07-25 19:02:01.610541] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=670723 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 670723 /var/tmp/bdevperf.sock 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 670723 ']' 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.315 19:02:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.315 [2024-07-25 19:02:01.657065] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:09.315 [2024-07-25 19:02:01.657106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670723 ] 00:11:09.315 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.315 [2024-07-25 19:02:01.727180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.574 [2024-07-25 19:02:01.804866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:10.142 NVMe0n1 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.142 19:02:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.401 Running I/O for 10 seconds... 
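
The verify run above is driven remotely: bdevperf starts idle (-z) on its own RPC socket, the NVMe-oF controller is attached over RDMA, and bdevperf.py triggers the I/O whose results follow. A condensed sketch of what the trace shows:

  # bdevperf waits for configuration over its RPC socket instead of a config file (-z).
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  # Attach the target namespace as NVMe0n1, then kick off the 10-second run.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
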
00:11:20.382
00:11:20.382 Latency(us)
00:11:20.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:20.382 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:11:20.382 Verification LBA range: start 0x0 length 0x4000
00:11:20.382 NVMe0n1 : 10.05 17003.48 66.42 0.00 0.00 60048.05 19717.79 39435.58
00:11:20.382 ===================================================================================================================
00:11:20.382 Total : 17003.48 66.42 0.00 0.00 60048.05 19717.79 39435.58
00:11:20.382 0
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 670723
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 670723 ']'
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 670723
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670723
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670723'
00:11:20.382 killing process with pid 670723
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 670723
00:11:20.382 Received shutdown signal, test time was about 10.000000 seconds
00:11:20.382
00:11:20.382 Latency(us)
00:11:20.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:20.382 ===================================================================================================================
00:11:20.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:20.382 19:02:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 670723
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:20.641 rmmod nvme_rdma
00:11:20.641 rmmod nvme_fabrics
00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.641
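The 10.05 s run lands at 17003.48 IOPS (66.42 MiB/s) with zero failures and zero timeouts at queue depth 1024; the second, all-zero latency table is just bdevperf's shutdown report. The killprocess helper that tears bdevperf down is traced above through its autotest_common.sh line numbers; reconstructed from those, it behaves roughly as follows (not the verbatim source, and the sudo branch is not exercised in this run, so its body is an assumption). The teardown trace continues below.

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                            # @950: a pid is required
    kill -0 "$pid" || return 1                           # @954: probe that it is still alive
    if [ "$(uname)" = Linux ]; then                      # @955
        process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_0 here
    fi
    if [ "$process_name" = sudo ]; then                  # @960: false for reactor_0
        :  # sudo-wrapped processes take a different path (assumption, untraced)
    fi
    echo "killing process with pid $pid"                 # @968
    kill "$pid"                                          # @969: SIGTERM, bdevperf drains and exits
    wait "$pid"                                          # @974: reap it and return its status
}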
19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 670473 ']' 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 670473 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 670473 ']' 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 670473 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.641 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670473 00:11:20.900 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:20.900 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:20.900 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670473' 00:11:20.900 killing process with pid 670473 00:11:20.900 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 670473 00:11:20.900 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 670473 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:21.158 00:11:21.158 real 0m18.795s 00:11:21.158 user 0m26.209s 00:11:21.158 sys 0m5.050s 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 ************************************ 00:11:21.158 END TEST nvmf_queue_depth 00:11:21.158 ************************************ 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.158 ************************************ 00:11:21.158 START TEST nvmf_target_multipath 00:11:21.158 ************************************ 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:21.158 * Looking for test storage... 
00:11:21.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.158 19:02:13 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.727 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:27.728 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.728 
19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:27.728 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:27.728 Found net devices under 0000:af:00.0: mlx_0_0 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:27.728 Found net devices under 0000:af:00.1: mlx_0_1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
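At this point the device walk above has bucketed the supported Intel (e810/x722) and Mellanox IDs, matched both ConnectX functions (0x15b3 device 0x1017 at 0000:af:00.0 and 0000:af:00.1), and resolved each to its kernel netdev through sysfs. The @382-@401 mapping step reduces to roughly the loop below; population of pci_devs from the PCI cache is omitted, and the empty-netdev skip is an assumption.

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:af:00.0/net/mlx_0_0
    (( ${#pci_net_devs[@]} > 0 )) || continue         # assumption: skip ports with no netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done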
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:27.728 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.728 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:27.728 altname enp175s0f0np0 00:11:27.728 altname ens801f0np0 00:11:27.728 inet 192.168.100.8/24 scope global mlx_0_0 00:11:27.728 valid_lft forever preferred_lft forever 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.728 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:27.729 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.729 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:27.729 altname enp175s0f1np1 00:11:27.729 altname ens801f1np1 00:11:27.729 inet 192.168.100.9/24 scope global mlx_0_1 00:11:27.729 valid_lft forever preferred_lft forever 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.729 19:02:19 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:27.729 192.168.100.9' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:27.729 192.168.100.9' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:27.729 192.168.100.9' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:27.729 run this test only with TCP transport for now 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 
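multipath.sh only reaches address discovery before it prints "run this test only with TCP transport for now" and exits 0; the rmmod output below is its nvmftestfini running. The discovery itself, visible in the get_ip_address calls traced above, amounts to the pipeline below. The fixed two-interface loop is a simplification standing in for get_rdma_if_list.

get_ip_address() {
    local interface=$1
    # `ip -o -4` puts "ADDR/PREFIX" in field 4; drop the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'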
00:11:27.729 rmmod nvme_rdma 00:11:27.729 rmmod nvme_fabrics 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:27.729 00:11:27.729 real 0m6.017s 00:11:27.729 user 0m1.784s 00:11:27.729 sys 0m4.375s 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.729 ************************************ 00:11:27.729 END TEST nvmf_target_multipath 00:11:27.729 ************************************ 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.729 ************************************ 00:11:27.729 START TEST nvmf_zcopy 00:11:27.729 ************************************ 
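nvmf_queue_depth and nvmf_target_multipath both exited through the same nvmftestfini/nvmfcleanup path, whose unload loop is traced at nvmf/common.sh@120-@125. Schematically it looks like the sketch below; only one iteration appears in the trace, so the early break and the retry pacing are assumptions.

nvmfcleanup() {
    sync                                    # @117: flush before removing modules
    set +e                                  # @120: unloading can fail while qpairs drain
    for i in {1..20}; do                    # @121
        modprobe -v -r nvme-rdma &&         # @122: prints the rmmod lines seen above
            modprobe -v -r nvme-fabrics &&  # @123
            break                           # assumption: stop once both modules unload
        sleep 1                             # assumption: pace the retries
    done
    set -e                                  # @124
    return 0                                # @125: cleanup trouble never fails the test
}

Returning 0 unconditionally is presumably deliberate: this runs under the EXIT trap, and a stubborn module should not turn an otherwise passing test red. The zcopy test that starts above repeats the same common.sh setup, so its trace mirrors the multipath one.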
00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:27.729 * Looking for test storage... 00:11:27.729 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.729 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.730 19:02:19 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.005 19:02:25 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.005 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:33.005 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:33.006 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:33.006 19:02:25 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:33.006 Found net devices under 0000:af:00.0: mlx_0_0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:33.006 Found net devices under 0000:af:00.1: mlx_0_1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:33.006 19:02:25 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:33.006 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:33.006 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:33.006 altname enp175s0f0np0 00:11:33.006 altname ens801f0np0 00:11:33.006 inet 192.168.100.8/24 scope global mlx_0_0 00:11:33.006 valid_lft forever preferred_lft forever 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:33.006 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:33.006 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:33.006 altname enp175s0f1np1 00:11:33.006 altname ens801f1np1 00:11:33.006 inet 192.168.100.9/24 scope global mlx_0_1 00:11:33.006 valid_lft forever preferred_lft forever 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:33.006 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:33.266 19:02:25 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:33.266 192.168.100.9' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:33.266 192.168.100.9' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:33.266 192.168.100.9' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 
-- # set +x 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=678973 00:11:33.266 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 678973 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 678973 ']' 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.267 19:02:25 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:33.267 [2024-07-25 19:02:25.601236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:33.267 [2024-07-25 19:02:25.601276] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.267 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.267 [2024-07-25 19:02:25.669467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.525 [2024-07-25 19:02:25.743345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.526 [2024-07-25 19:02:25.743380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.526 [2024-07-25 19:02:25.743387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.526 [2024-07-25 19:02:25.743393] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.526 [2024-07-25 19:02:25.743398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
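The NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP assignments traced just above reduce to one pipeline per RDMA interface plus a head/tail split of the collected list. A condensed sketch of that logic (interface names taken from this run; not the verbatim common.sh helper):

# Sketch: collect each RDMA interface's IPv4 address, then split the list
# into the two target IPs exactly as the head/tail calls in the trace do.
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9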
00:11:33.526 [2024-07-25 19:02:25.743436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:34.094 Unsupported transport: rdma 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:34.094 nvmf_trace.0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:34.094 rmmod nvme_rdma 00:11:34.094 rmmod nvme_fabrics 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 
00:11:34.094 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:34.095 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 678973 ']' 00:11:34.095 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 678973 00:11:34.095 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 678973 ']' 00:11:34.095 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 678973 00:11:34.095 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 678973 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 678973' 00:11:34.354 killing process with pid 678973 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 678973 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 678973 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:34.354 00:11:34.354 real 0m7.250s 00:11:34.354 user 0m3.148s 00:11:34.354 sys 0m4.789s 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:34.354 ************************************ 00:11:34.354 END TEST nvmf_zcopy 00:11:34.354 ************************************ 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:34.354 19:02:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.613 ************************************ 00:11:34.613 START TEST nvmf_nmic 00:11:34.613 ************************************ 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:34.613 * Looking for test storage... 
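The killprocess teardown traced above amounts to verifying the pid, logging what is being killed, then signalling and reaping it. A hedged sketch, simplified from the autotest helper (the sudo/root branch is omitted):

# Sketch of the teardown: confirm the process exists, report its comm name
# (reactor_1 in this run), then kill it and wait for it to exit.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1      # already gone
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap only if it is our child
}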
00:11:34.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.613 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
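Among the variables set while sourcing common.sh above, the host NQN/ID pair comes from nvme-cli. A small sketch of that derivation (the suffix strip is an assumption, but it matches the logged values):

# Sketch: generate a host NQN and reuse its uuid as the host ID, giving the
# nqn.2014-08.org.nvmexpress:uuid:80bdebd3-... / 80bdebd3-... pair in the log.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: host ID is the uuid suffix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")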
00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.614 19:02:26 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:11:41.184 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:41.184 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:11:41.185 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:11:41.185 Found net devices under 0000:af:00.0: mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:11:41.185 Found net devices under 0000:af:00.1: mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.185 
19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:41.185 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.185 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:11:41.185 altname enp175s0f0np0 00:11:41.185 altname ens801f0np0 00:11:41.185 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.185 valid_lft forever preferred_lft forever 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.185 
19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:41.185 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.185 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:11:41.185 altname enp175s0f1np1 00:11:41.185 altname ens801f1np1 00:11:41.185 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.185 valid_lft forever preferred_lft forever 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.185 192.168.100.9' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:41.185 192.168.100.9' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:41.185 192.168.100.9' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=682361 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 682361 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 682361 ']' 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.185 
19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.185 19:02:32 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.185 [2024-07-25 19:02:32.902064] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:41.185 [2024-07-25 19:02:32.902110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.185 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.185 [2024-07-25 19:02:32.970941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.185 [2024-07-25 19:02:33.048545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.185 [2024-07-25 19:02:33.048584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.185 [2024-07-25 19:02:33.048592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.185 [2024-07-25 19:02:33.048598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.185 [2024-07-25 19:02:33.048603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.185 [2024-07-25 19:02:33.048647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.185 [2024-07-25 19:02:33.048759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.185 [2024-07-25 19:02:33.048775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.185 [2024-07-25 19:02:33.048781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.444 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.444 [2024-07-25 19:02:33.797293] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdbbdf0/0xdc02e0) succeed. 
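The waitforlisten step above (max_retries=100, pid 682361) blocks until the freshly started nvmf_tgt is alive and its JSON-RPC socket exists. A minimal sketch of that polling loop (not the exact autotest helper; the retry interval is an assumption):

# Sketch: poll until the target pid is alive and the RPC UNIX socket appears.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0 max_retries=100
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [[ -S $rpc_addr ]] && return 0           # socket is up, RPCs can flow
        sleep 0.5                                # assumed interval
    done
    return 1
}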
00:11:41.444 [2024-07-25 19:02:33.806725] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdbd430/0xe01980) succeed. 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.703 Malloc0 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.703 [2024-07-25 19:02:33.974137] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:41.703 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:41.704 test case1: single bdev can't be used in multiple subsystems 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
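Stripped of the xtrace noise, the target-side setup in this test is a handful of JSON-RPC calls. Issued by hand against the default socket they would look roughly like this (rpc.py path relative to the spdk checkout):

# Sketch: the same configuration as the rpc_cmd trace above, as direct
# rpc.py invocations.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420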
00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.704 19:02:33 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 [2024-07-25 19:02:33.997991] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:41.704 [2024-07-25 19:02:33.998010] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:41.704 [2024-07-25 19:02:33.998017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.704 request: 00:11:41.704 { 00:11:41.704 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.704 "namespace": { 00:11:41.704 "bdev_name": "Malloc0", 00:11:41.704 "no_auto_visible": false 00:11:41.704 }, 00:11:41.704 "method": "nvmf_subsystem_add_ns", 00:11:41.704 "req_id": 1 00:11:41.704 } 00:11:41.704 Got JSON-RPC error response 00:11:41.704 response: 00:11:41.704 { 00:11:41.704 "code": -32602, 00:11:41.704 "message": "Invalid parameters" 00:11:41.704 } 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:41.704 Adding namespace failed - expected result. 
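test case1 above deliberately drives nvmf_subsystem_add_ns into the expected -32602 error, and the nmic_status variable in the trace is how the script records that the failure actually occurred. A reduced sketch of that assertion:

# Sketch: the second add_ns must fail because Malloc0 is already claimed
# by cnode1; treat unexpected success as a test failure.
nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo "namespace add unexpectedly succeeded"
    exit 1
fi
echo ' Adding namespace failed - expected result.'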
00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:41.704 test case2: host connect to nvmf target in multiple paths 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:41.704 [2024-07-25 19:02:34.010035] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.704 19:02:34 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:44.992 19:02:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:48.277 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.277 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.277 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.277 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.277 19:02:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:50.181 19:02:42 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:50.181 [global] 00:11:50.181 thread=1 00:11:50.181 invalidate=1 00:11:50.181 rw=write 00:11:50.181 time_based=1 00:11:50.181 runtime=1 00:11:50.181 ioengine=libaio 00:11:50.181 direct=1 00:11:50.181 bs=4096 00:11:50.181 iodepth=1 00:11:50.181 norandommap=0 00:11:50.181 numjobs=1 00:11:50.181 00:11:50.181 verify_dump=1 00:11:50.181 verify_backlog=512 00:11:50.181 verify_state_save=0 00:11:50.181 do_verify=1 00:11:50.181 verify=crc32c-intel 00:11:50.181 [job0] 00:11:50.181 filename=/dev/nvme0n1 00:11:50.181 Could not set queue depth (nvme0n1) 00:11:50.181 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.181 fio-3.35 00:11:50.181 Starting 1 thread 00:11:51.557 00:11:51.557 job0: (groupid=0, jobs=1): err= 0: pid=684103: Thu Jul 25 19:02:43 2024 00:11:51.557 read: IOPS=7137, BW=27.9MiB/s (29.2MB/s)(27.9MiB/1001msec) 00:11:51.557 slat (nsec): min=6691, max=23282, avg=7249.06, stdev=648.55 00:11:51.557 clat (nsec): min=42947, max=80538, avg=60509.58, stdev=3727.75 00:11:51.557 lat (nsec): min=58844, max=87253, avg=67758.64, stdev=3765.19 00:11:51.557 clat percentiles (nsec): 00:11:51.557 | 1.00th=[53504], 5.00th=[54528], 10.00th=[55552], 20.00th=[57088], 00:11:51.557 | 30.00th=[58112], 40.00th=[59136], 50.00th=[60160], 60.00th=[61184], 00:11:51.557 | 70.00th=[62208], 80.00th=[63744], 90.00th=[65280], 95.00th=[67072], 00:11:51.557 | 99.00th=[69120], 99.50th=[71168], 99.90th=[75264], 99.95th=[77312], 00:11:51.557 | 99.99th=[80384] 00:11:51.557 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:11:51.557 slat (nsec): min=8895, max=77439, avg=9629.85, stdev=1134.70 00:11:51.557 clat (nsec): min=46770, max=80737, avg=58367.32, stdev=3770.63 00:11:51.557 lat (usec): min=58, max=158, avg=68.00, stdev= 3.98 00:11:51.557 clat percentiles (nsec): 00:11:51.557 | 1.00th=[50944], 5.00th=[52480], 10.00th=[53504], 20.00th=[55040], 00:11:51.557 | 30.00th=[56064], 40.00th=[57088], 50.00th=[58112], 60.00th=[59136], 00:11:51.557 | 70.00th=[60160], 80.00th=[61696], 90.00th=[63232], 95.00th=[64768], 00:11:51.557 | 99.00th=[67072], 99.50th=[68096], 99.90th=[72192], 99.95th=[73216], 00:11:51.557 | 99.99th=[80384] 00:11:51.557 bw ( KiB/s): min=28704, max=28704, per=100.00%, avg=28704.00, stdev= 0.00, samples=1 00:11:51.557 iops : min= 7176, max= 7176, avg=7176.00, stdev= 0.00, samples=1 00:11:51.557 lat (usec) : 50=0.11%, 100=99.89% 00:11:51.557 cpu : usr=7.20%, sys=11.20%, ctx=14313, majf=0, minf=1 00:11:51.557 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:51.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.557 issued rwts: total=7145,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.557 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:51.557 00:11:51.557 Run status group 0 (all jobs): 00:11:51.557 READ: bw=27.9MiB/s (29.2MB/s), 27.9MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=27.9MiB (29.3MB), run=1001-1001msec 00:11:51.557 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:11:51.557 00:11:51.557 Disk stats (read/write): 00:11:51.557 nvme0n1: ios=6255/6656, merge=0/0, ticks=350/367, in_queue=717, util=90.58% 00:11:51.557 19:02:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # 
lsblk -l -o NAME,SERIAL 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:56.829 rmmod nvme_rdma 00:11:56.829 rmmod nvme_fabrics 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 682361 ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 682361 ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 682361' 00:11:56.829 killing process with pid 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 682361 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:56.829 00:11:56.829 real 0m21.972s 00:11:56.829 user 1m9.950s 00:11:56.829 sys 0m5.427s 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.829 ************************************ 00:11:56.829 
END TEST nvmf_nmic 00:11:56.829 ************************************ 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:56.829 ************************************ 00:11:56.829 START TEST nvmf_fio_target 00:11:56.829 ************************************ 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:56.829 * Looking for test storage... 00:11:56.829 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.829 19:02:48 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.829 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.829 
19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.830 19:02:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 
00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:03.396 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 
-- # NVME_CONNECT='nvme connect -i 15' 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:03.396 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.396 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:03.397 Found net devices under 0000:af:00.0: mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:03.397 Found net devices under 0000:af:00.1: mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # 
rdma_device_init 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:03.397 19:02:54 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:03.397 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.397 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:03.397 altname enp175s0f0np0 00:12:03.397 altname ens801f0np0 00:12:03.397 inet 192.168.100.8/24 scope global mlx_0_0 00:12:03.397 valid_lft forever preferred_lft forever 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:03.397 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.397 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:03.397 altname enp175s0f1np1 00:12:03.397 altname ens801f1np1 00:12:03.397 inet 192.168.100.9/24 scope global mlx_0_1 00:12:03.397 valid_lft forever preferred_lft forever 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:03.397 192.168.100.9' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:03.397 192.168.100.9' 00:12:03.397 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@457 -- # head -n 1 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:03.398 192.168.100.9' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=688225 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 688225 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 688225 ']' 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.398 19:02:54 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 [2024-07-25 19:02:54.913512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:03.398 [2024-07-25 19:02:54.913559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.398 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.398 [2024-07-25 19:02:54.982406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.398 [2024-07-25 19:02:55.052685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.398 [2024-07-25 19:02:55.052725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.398 [2024-07-25 19:02:55.052732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.398 [2024-07-25 19:02:55.052738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.398 [2024-07-25 19:02:55.052743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.398 [2024-07-25 19:02:55.052819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.398 [2024-07-25 19:02:55.052954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.398 [2024-07-25 19:02:55.052996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.398 [2024-07-25 19:02:55.052997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.398 19:02:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:03.657 [2024-07-25 19:02:55.972015] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d37df0/0x1d3c2e0) succeed. 00:12:03.657 [2024-07-25 19:02:55.981583] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d39430/0x1d7d980) succeed. 
00:12:03.657 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.914 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:03.914 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.172 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:04.172 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.431 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:04.431 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.689 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:04.689 19:02:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:04.948 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.948 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:04.948 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.206 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:05.206 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:05.464 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:05.464 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:05.721 19:02:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.979 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:05.979 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.979 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:05.979 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.237 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:06.495 [2024-07-25 19:02:58.765310] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:06.495 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:06.754 19:02:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:06.754 19:02:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:10.041 19:03:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:11.945 19:03:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:11.945 [global] 00:12:11.945 thread=1 00:12:11.945 invalidate=1 00:12:11.945 rw=write 00:12:11.945 time_based=1 00:12:11.945 runtime=1 00:12:11.945 ioengine=libaio 00:12:11.945 direct=1 00:12:11.945 bs=4096 00:12:11.945 iodepth=1 00:12:11.945 norandommap=0 00:12:11.945 numjobs=1 00:12:11.945 00:12:11.945 verify_dump=1 00:12:11.945 verify_backlog=512 00:12:11.945 verify_state_save=0 00:12:11.945 do_verify=1 00:12:11.945 verify=crc32c-intel 00:12:11.945 [job0] 00:12:11.945 filename=/dev/nvme0n1 00:12:11.945 [job1] 00:12:11.945 filename=/dev/nvme0n2 00:12:11.945 [job2] 00:12:11.945 filename=/dev/nvme0n3 00:12:11.945 [job3] 00:12:11.945 filename=/dev/nvme0n4 00:12:12.217 Could not set queue depth (nvme0n1) 00:12:12.217 Could not set queue depth (nvme0n2) 00:12:12.217 Could not set queue depth (nvme0n3) 00:12:12.217 Could not set queue depth (nvme0n4) 00:12:12.474 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.474 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.474 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.474 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.474 fio-3.35 00:12:12.474 Starting 4 threads 00:12:13.846 00:12:13.846 job0: (groupid=0, jobs=1): err= 0: pid=690194: Thu Jul 25 19:03:05 2024 00:12:13.846 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:13.846 slat (nsec): min=6215, max=23732, avg=7298.05, stdev=938.47 00:12:13.846 clat (usec): min=64, max=348, avg=128.20, stdev=16.21 00:12:13.846 lat (usec): min=72, max=356, avg=135.50, stdev=16.22 00:12:13.846 clat percentiles (usec): 00:12:13.846 | 1.00th=[ 81], 5.00th=[ 94], 10.00th=[ 118], 20.00th=[ 122], 00:12:13.846 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:12:13.846 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 161], 00:12:13.846 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 200], 00:12:13.846 | 99.99th=[ 351] 00:12:13.846 write: IOPS=3833, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:12:13.846 slat (nsec): min=8180, max=41301, avg=9273.43, stdev=1017.69 00:12:13.846 clat (usec): min=61, max=204, avg=121.05, stdev=19.26 00:12:13.846 lat (usec): min=71, max=213, avg=130.32, stdev=19.31 00:12:13.846 clat percentiles (usec): 00:12:13.846 | 1.00th=[ 74], 5.00th=[ 82], 10.00th=[ 105], 20.00th=[ 113], 00:12:13.846 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:12:13.846 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 151], 95.00th=[ 159], 00:12:13.846 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 200], 00:12:13.846 | 99.99th=[ 204] 00:12:13.846 bw ( KiB/s): min=16384, max=16384, per=26.92%, avg=16384.00, stdev= 0.00, samples=1 00:12:13.846 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:13.846 lat (usec) : 100=6.86%, 250=93.13%, 500=0.01% 00:12:13.846 cpu : usr=4.20%, sys=8.30%, ctx=7421, majf=0, minf=1 00:12:13.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.846 issued rwts: total=3584,3837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.846 job1: (groupid=0, jobs=1): err= 0: pid=690195: Thu Jul 25 19:03:05 2024 00:12:13.846 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:13.846 slat (nsec): min=6570, max=27037, avg=7507.29, stdev=851.18 00:12:13.846 clat (usec): min=71, max=242, avg=128.40, stdev=12.00 00:12:13.846 lat (usec): min=78, max=250, avg=135.91, stdev=11.99 00:12:13.846 clat percentiles (usec): 00:12:13.846 | 1.00th=[ 85], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 123], 00:12:13.846 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 130], 00:12:13.846 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 147], 00:12:13.846 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 200], 99.95th=[ 221], 00:12:13.846 | 99.99th=[ 243] 00:12:13.846 write: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:12:13.846 slat (nsec): min=8362, max=34879, avg=9375.39, stdev=1011.37 00:12:13.846 clat (usec): 
min=66, max=197, avg=123.14, stdev=15.85 00:12:13.846 lat (usec): min=75, max=231, avg=132.52, stdev=15.90 00:12:13.846 clat percentiles (usec): 00:12:13.846 | 1.00th=[ 79], 5.00th=[ 105], 10.00th=[ 112], 20.00th=[ 115], 00:12:13.846 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 121], 60.00th=[ 123], 00:12:13.846 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 143], 95.00th=[ 159], 00:12:13.846 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 196], 99.95th=[ 198], 00:12:13.846 | 99.99th=[ 198] 00:12:13.846 bw ( KiB/s): min=16384, max=16384, per=26.92%, avg=16384.00, stdev= 0.00, samples=1 00:12:13.846 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:13.846 lat (usec) : 100=3.29%, 250=96.71% 00:12:13.846 cpu : usr=3.50%, sys=8.90%, ctx=7348, majf=0, minf=1 00:12:13.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.846 issued rwts: total=3584,3764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.846 job2: (groupid=0, jobs=1): err= 0: pid=690196: Thu Jul 25 19:03:05 2024 00:12:13.846 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:13.846 slat (nsec): min=5477, max=43548, avg=9954.75, stdev=3521.93 00:12:13.846 clat (usec): min=71, max=183, avg=123.45, stdev=12.07 00:12:13.846 lat (usec): min=85, max=190, avg=133.41, stdev=12.20 00:12:13.847 clat percentiles (usec): 00:12:13.847 | 1.00th=[ 84], 5.00th=[ 95], 10.00th=[ 114], 20.00th=[ 119], 00:12:13.847 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 127], 00:12:13.847 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 135], 95.00th=[ 139], 00:12:13.847 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 172], 99.95th=[ 174], 00:12:13.847 | 99.99th=[ 184] 00:12:13.847 write: IOPS=3790, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:12:13.847 slat (nsec): min=6465, max=43925, avg=11946.11, stdev=3569.20 00:12:13.847 clat (usec): min=68, max=200, avg=120.30, stdev=14.69 00:12:13.847 lat (usec): min=75, max=223, avg=132.25, stdev=14.43 00:12:13.847 clat percentiles (usec): 00:12:13.847 | 1.00th=[ 86], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 113], 00:12:13.847 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:12:13.847 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 141], 95.00th=[ 147], 00:12:13.847 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 198], 99.95th=[ 200], 00:12:13.847 | 99.99th=[ 202] 00:12:13.847 bw ( KiB/s): min=16384, max=16384, per=26.92%, avg=16384.00, stdev= 0.00, samples=1 00:12:13.847 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:13.847 lat (usec) : 100=4.72%, 250=95.28% 00:12:13.847 cpu : usr=4.40%, sys=9.40%, ctx=7378, majf=0, minf=2 00:12:13.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.847 issued rwts: total=3584,3794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.847 job3: (groupid=0, jobs=1): err= 0: pid=690197: Thu Jul 25 19:03:05 2024 00:12:13.847 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:13.847 slat (nsec): min=6332, max=22303, avg=7361.23, stdev=847.84 00:12:13.847 clat (usec): 
min=75, max=346, avg=128.21, stdev=13.82 00:12:13.847 lat (usec): min=82, max=353, avg=135.57, stdev=13.81 00:12:13.847 clat percentiles (usec): 00:12:13.847 | 1.00th=[ 90], 5.00th=[ 106], 10.00th=[ 119], 20.00th=[ 123], 00:12:13.847 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:12:13.847 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 155], 00:12:13.847 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 210], 99.95th=[ 217], 00:12:13.847 | 99.99th=[ 347] 00:12:13.847 write: IOPS=3830, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:12:13.847 slat (nsec): min=8249, max=38110, avg=9350.62, stdev=977.89 00:12:13.847 clat (usec): min=69, max=206, avg=121.02, stdev=15.81 00:12:13.847 lat (usec): min=78, max=232, avg=130.37, stdev=15.94 00:12:13.847 clat percentiles (usec): 00:12:13.847 | 1.00th=[ 82], 5.00th=[ 92], 10.00th=[ 110], 20.00th=[ 114], 00:12:13.847 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:12:13.847 | 70.00th=[ 124], 80.00th=[ 129], 90.00th=[ 143], 95.00th=[ 149], 00:12:13.847 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 198], 99.95th=[ 198], 00:12:13.847 | 99.99th=[ 206] 00:12:13.847 bw ( KiB/s): min=16384, max=16384, per=26.92%, avg=16384.00, stdev= 0.00, samples=1 00:12:13.847 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:13.847 lat (usec) : 100=5.38%, 250=94.61%, 500=0.01% 00:12:13.847 cpu : usr=4.20%, sys=8.30%, ctx=7418, majf=0, minf=1 00:12:13.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.847 issued rwts: total=3584,3834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.847 00:12:13.847 Run status group 0 (all jobs): 00:12:13.847 READ: bw=55.9MiB/s (58.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=56.0MiB (58.7MB), run=1001-1001msec 00:12:13.847 WRITE: bw=59.4MiB/s (62.3MB/s), 14.7MiB/s-15.0MiB/s (15.4MB/s-15.7MB/s), io=59.5MiB (62.4MB), run=1001-1001msec 00:12:13.847 00:12:13.847 Disk stats (read/write): 00:12:13.847 nvme0n1: ios=3122/3287, merge=0/0, ticks=397/383, in_queue=780, util=87.17% 00:12:13.847 nvme0n2: ios=3072/3228, merge=0/0, ticks=377/373, in_queue=750, util=87.32% 00:12:13.847 nvme0n3: ios=3072/3261, merge=0/0, ticks=353/358, in_queue=711, util=89.22% 00:12:13.847 nvme0n4: ios=3072/3285, merge=0/0, ticks=375/374, in_queue=749, util=89.78% 00:12:13.847 19:03:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:13.847 [global] 00:12:13.847 thread=1 00:12:13.847 invalidate=1 00:12:13.847 rw=randwrite 00:12:13.847 time_based=1 00:12:13.847 runtime=1 00:12:13.847 ioengine=libaio 00:12:13.847 direct=1 00:12:13.847 bs=4096 00:12:13.847 iodepth=1 00:12:13.847 norandommap=0 00:12:13.847 numjobs=1 00:12:13.847 00:12:13.847 verify_dump=1 00:12:13.847 verify_backlog=512 00:12:13.847 verify_state_save=0 00:12:13.847 do_verify=1 00:12:13.847 verify=crc32c-intel 00:12:13.847 [job0] 00:12:13.847 filename=/dev/nvme0n1 00:12:13.847 [job1] 00:12:13.847 filename=/dev/nvme0n2 00:12:13.847 [job2] 00:12:13.847 filename=/dev/nvme0n3 00:12:13.847 [job3] 00:12:13.847 filename=/dev/nvme0n4 00:12:13.847 Could not set queue depth (nvme0n1) 00:12:13.847 Could not set queue depth (nvme0n2) 
00:12:13.847 Could not set queue depth (nvme0n3) 00:12:13.847 Could not set queue depth (nvme0n4) 00:12:13.847 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.847 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.847 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.847 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:13.847 fio-3.35 00:12:13.847 Starting 4 threads 00:12:15.218 00:12:15.218 job0: (groupid=0, jobs=1): err= 0: pid=690569: Thu Jul 25 19:03:07 2024 00:12:15.218 read: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:12:15.218 slat (nsec): min=6330, max=36088, avg=8202.62, stdev=2254.95 00:12:15.218 clat (usec): min=61, max=191, avg=116.47, stdev=21.63 00:12:15.218 lat (usec): min=68, max=198, avg=124.67, stdev=21.67 00:12:15.218 clat percentiles (usec): 00:12:15.218 | 1.00th=[ 70], 5.00th=[ 77], 10.00th=[ 84], 20.00th=[ 104], 00:12:15.218 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:12:15.218 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 159], 00:12:15.218 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 190], 00:12:15.218 | 99.99th=[ 192] 00:12:15.218 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:12:15.219 slat (nsec): min=8100, max=38507, avg=10109.60, stdev=2415.87 00:12:15.219 clat (usec): min=57, max=188, avg=111.44, stdev=22.83 00:12:15.219 lat (usec): min=67, max=198, avg=121.55, stdev=23.15 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 65], 5.00th=[ 70], 10.00th=[ 75], 20.00th=[ 94], 00:12:15.219 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 119], 00:12:15.219 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 135], 95.00th=[ 153], 00:12:15.219 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 176], 99.95th=[ 180], 00:12:15.219 | 99.99th=[ 190] 00:12:15.219 bw ( KiB/s): min=16616, max=16616, per=22.40%, avg=16616.00, stdev= 0.00, samples=1 00:12:15.219 iops : min= 4154, max= 4154, avg=4154.00, stdev= 0.00, samples=1 00:12:15.219 lat (usec) : 100=19.70%, 250=80.30% 00:12:15.219 cpu : usr=4.20%, sys=9.50%, ctx=7976, majf=0, minf=1 00:12:15.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 issued rwts: total=3880,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.219 job1: (groupid=0, jobs=1): err= 0: pid=690570: Thu Jul 25 19:03:07 2024 00:12:15.219 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:15.219 slat (nsec): min=6376, max=42500, avg=8367.19, stdev=2466.91 00:12:15.219 clat (usec): min=64, max=186, avg=107.31, stdev=20.58 00:12:15.219 lat (usec): min=75, max=193, avg=115.68, stdev=20.44 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 84], 00:12:15.219 | 30.00th=[ 92], 40.00th=[ 105], 50.00th=[ 113], 60.00th=[ 117], 00:12:15.219 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 137], 00:12:15.219 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 176], 00:12:15.219 | 99.99th=[ 186] 00:12:15.219 write: IOPS=4376, BW=17.1MiB/s 
(17.9MB/s)(17.1MiB/1001msec); 0 zone resets 00:12:15.219 slat (nsec): min=8579, max=39472, avg=10694.97, stdev=2609.21 00:12:15.219 clat (usec): min=60, max=201, avg=104.17, stdev=22.20 00:12:15.219 lat (usec): min=71, max=211, avg=114.87, stdev=22.07 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 80], 00:12:15.219 | 30.00th=[ 85], 40.00th=[ 102], 50.00th=[ 111], 60.00th=[ 115], 00:12:15.219 | 70.00th=[ 119], 80.00th=[ 123], 90.00th=[ 128], 95.00th=[ 135], 00:12:15.219 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 178], 00:12:15.219 | 99.99th=[ 202] 00:12:15.219 bw ( KiB/s): min=20480, max=20480, per=27.61%, avg=20480.00, stdev= 0.00, samples=1 00:12:15.219 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:15.219 lat (usec) : 100=37.34%, 250=62.66% 00:12:15.219 cpu : usr=5.80%, sys=8.80%, ctx=8477, majf=0, minf=1 00:12:15.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 issued rwts: total=4096,4381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.219 job2: (groupid=0, jobs=1): err= 0: pid=690571: Thu Jul 25 19:03:07 2024 00:12:15.219 read: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:12:15.219 slat (nsec): min=6292, max=23296, avg=7533.66, stdev=858.56 00:12:15.219 clat (usec): min=69, max=179, avg=91.23, stdev=11.90 00:12:15.219 lat (usec): min=77, max=186, avg=98.77, stdev=11.89 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:12:15.219 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:12:15.219 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 121], 00:12:15.219 | 99.00th=[ 135], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 172], 00:12:15.219 | 99.99th=[ 180] 00:12:15.219 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:15.219 slat (nsec): min=8124, max=33777, avg=9058.78, stdev=941.10 00:12:15.219 clat (usec): min=66, max=174, avg=87.97, stdev=12.45 00:12:15.219 lat (usec): min=75, max=182, avg=97.03, stdev=12.50 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 80], 00:12:15.219 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:12:15.219 | 70.00th=[ 89], 80.00th=[ 92], 90.00th=[ 103], 95.00th=[ 118], 00:12:15.219 | 99.00th=[ 133], 99.50th=[ 145], 99.90th=[ 161], 99.95th=[ 167], 00:12:15.219 | 99.99th=[ 176] 00:12:15.219 bw ( KiB/s): min=20480, max=20480, per=27.61%, avg=20480.00, stdev= 0.00, samples=1 00:12:15.219 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:15.219 lat (usec) : 100=88.24%, 250=11.76% 00:12:15.219 cpu : usr=5.80%, sys=10.80%, ctx=10024, majf=0, minf=1 00:12:15.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 issued rwts: total=4904,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.219 job3: (groupid=0, jobs=1): err= 0: pid=690572: Thu Jul 25 19:03:07 2024 00:12:15.219 read: IOPS=4603, BW=18.0MiB/s 
(18.9MB/s)(18.0MiB/1001msec) 00:12:15.219 slat (nsec): min=6847, max=24417, avg=7748.61, stdev=942.73 00:12:15.219 clat (usec): min=69, max=298, avg=96.57, stdev=18.45 00:12:15.219 lat (usec): min=77, max=306, avg=104.31, stdev=18.44 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:12:15.219 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:12:15.219 | 70.00th=[ 96], 80.00th=[ 115], 90.00th=[ 127], 95.00th=[ 133], 00:12:15.219 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 182], 00:12:15.219 | 99.99th=[ 297] 00:12:15.219 write: IOPS=4960, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1001msec); 0 zone resets 00:12:15.219 slat (nsec): min=8252, max=41599, avg=9261.71, stdev=1394.90 00:12:15.219 clat (usec): min=66, max=184, avg=91.59, stdev=16.76 00:12:15.219 lat (usec): min=75, max=193, avg=100.85, stdev=17.01 00:12:15.219 clat percentiles (usec): 00:12:15.219 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 81], 00:12:15.219 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 88], 00:12:15.219 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 120], 95.00th=[ 128], 00:12:15.219 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:12:15.219 | 99.99th=[ 186] 00:12:15.219 bw ( KiB/s): min=18808, max=18808, per=25.36%, avg=18808.00, stdev= 0.00, samples=1 00:12:15.219 iops : min= 4702, max= 4702, avg=4702.00, stdev= 0.00, samples=1 00:12:15.219 lat (usec) : 100=76.75%, 250=23.24%, 500=0.01% 00:12:15.219 cpu : usr=5.80%, sys=10.30%, ctx=9573, majf=0, minf=1 00:12:15.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.219 issued rwts: total=4608,4965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.219 00:12:15.219 Run status group 0 (all jobs): 00:12:15.219 READ: bw=68.2MiB/s (71.6MB/s), 15.1MiB/s-19.1MiB/s (15.9MB/s-20.1MB/s), io=68.3MiB (71.6MB), run=1001-1001msec 00:12:15.219 WRITE: bw=72.4MiB/s (76.0MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=72.5MiB (76.0MB), run=1001-1001msec 00:12:15.219 00:12:15.219 Disk stats (read/write): 00:12:15.219 nvme0n1: ios=3367/3584, merge=0/0, ticks=388/369, in_queue=757, util=87.17% 00:12:15.219 nvme0n2: ios=3584/3811, merge=0/0, ticks=349/372, in_queue=721, util=87.44% 00:12:15.219 nvme0n3: ios=4096/4490, merge=0/0, ticks=344/377, in_queue=721, util=89.25% 00:12:15.219 nvme0n4: ios=4047/4096, merge=0/0, ticks=375/356, in_queue=731, util=89.80% 00:12:15.219 19:03:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:15.219 [global] 00:12:15.219 thread=1 00:12:15.219 invalidate=1 00:12:15.219 rw=write 00:12:15.219 time_based=1 00:12:15.219 runtime=1 00:12:15.220 ioengine=libaio 00:12:15.220 direct=1 00:12:15.220 bs=4096 00:12:15.220 iodepth=128 00:12:15.220 norandommap=0 00:12:15.220 numjobs=1 00:12:15.220 00:12:15.220 verify_dump=1 00:12:15.220 verify_backlog=512 00:12:15.220 verify_state_save=0 00:12:15.220 do_verify=1 00:12:15.220 verify=crc32c-intel 00:12:15.220 [job0] 00:12:15.220 filename=/dev/nvme0n1 00:12:15.220 [job1] 00:12:15.220 filename=/dev/nvme0n2 00:12:15.220 [job2] 00:12:15.220 filename=/dev/nvme0n3 00:12:15.220 [job3] 00:12:15.220 
filename=/dev/nvme0n4 00:12:15.220 Could not set queue depth (nvme0n1) 00:12:15.220 Could not set queue depth (nvme0n2) 00:12:15.220 Could not set queue depth (nvme0n3) 00:12:15.220 Could not set queue depth (nvme0n4) 00:12:15.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.477 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.477 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.477 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:15.477 fio-3.35 00:12:15.477 Starting 4 threads 00:12:16.880 00:12:16.880 job0: (groupid=0, jobs=1): err= 0: pid=690980: Thu Jul 25 19:03:09 2024 00:12:16.880 read: IOPS=9152, BW=35.8MiB/s (37.5MB/s)(35.9MiB/1003msec) 00:12:16.880 slat (nsec): min=1486, max=1872.3k, avg=54483.00, stdev=201984.83 00:12:16.880 clat (usec): min=1980, max=9126, avg=7113.17, stdev=527.92 00:12:16.880 lat (usec): min=2800, max=9222, avg=7167.65, stdev=519.02 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 5866], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6783], 00:12:16.880 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:12:16.880 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7701], 95.00th=[ 7832], 00:12:16.880 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 8356], 99.95th=[ 9110], 00:12:16.880 | 99.99th=[ 9110] 00:12:16.880 write: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec); 0 zone resets 00:12:16.880 slat (usec): min=2, max=2159, avg=51.77, stdev=192.12 00:12:16.880 clat (usec): min=5321, max=9188, avg=6707.21, stdev=465.72 00:12:16.880 lat (usec): min=5336, max=9212, avg=6758.97, stdev=458.65 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6325], 00:12:16.880 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6783], 00:12:16.880 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7308], 95.00th=[ 7373], 00:12:16.880 | 99.00th=[ 7570], 99.50th=[ 7570], 99.90th=[ 7767], 99.95th=[ 8160], 00:12:16.880 | 99.99th=[ 9241] 00:12:16.880 bw ( KiB/s): min=36864, max=36864, per=32.12%, avg=36864.00, stdev= 0.00, samples=2 00:12:16.880 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:12:16.880 lat (msec) : 2=0.01%, 4=0.17%, 10=99.82% 00:12:16.880 cpu : usr=3.49%, sys=5.09%, ctx=1154, majf=0, minf=1 00:12:16.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:12:16.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.880 issued rwts: total=9180,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.880 job1: (groupid=0, jobs=1): err= 0: pid=690997: Thu Jul 25 19:03:09 2024 00:12:16.880 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec) 00:12:16.880 slat (nsec): min=1431, max=1285.6k, avg=57727.23, stdev=207510.08 00:12:16.880 clat (usec): min=5418, max=10022, avg=7429.10, stdev=826.79 00:12:16.880 lat (usec): min=5600, max=10033, avg=7486.82, stdev=843.57 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 6128], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:12:16.880 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7439], 00:12:16.880 | 70.00th=[ 8160], 
80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8717], 00:12:16.880 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[ 9634], 99.95th=[ 9765], 00:12:16.880 | 99.99th=[10028] 00:12:16.880 write: IOPS=8787, BW=34.3MiB/s (36.0MB/s)(34.4MiB/1002msec); 0 zone resets 00:12:16.880 slat (nsec): min=1995, max=1837.5k, avg=54104.01, stdev=193392.75 00:12:16.880 clat (usec): min=447, max=9559, avg=7051.75, stdev=926.66 00:12:16.880 lat (usec): min=1205, max=9561, avg=7105.86, stdev=941.45 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 6128], 20.00th=[ 6259], 00:12:16.880 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7439], 00:12:16.880 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:12:16.880 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9372], 99.95th=[ 9503], 00:12:16.880 | 99.99th=[ 9503] 00:12:16.880 bw ( KiB/s): min=32768, max=32768, per=28.55%, avg=32768.00, stdev= 0.00, samples=1 00:12:16.880 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:12:16.880 lat (usec) : 500=0.01% 00:12:16.880 lat (msec) : 2=0.07%, 4=0.26%, 10=99.66%, 20=0.01% 00:12:16.880 cpu : usr=1.80%, sys=6.09%, ctx=1393, majf=0, minf=1 00:12:16.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:16.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.880 issued rwts: total=8704,8805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.880 job2: (groupid=0, jobs=1): err= 0: pid=691014: Thu Jul 25 19:03:09 2024 00:12:16.880 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:12:16.880 slat (nsec): min=1593, max=2468.8k, avg=95296.24, stdev=347595.45 00:12:16.880 clat (usec): min=8572, max=16959, avg=12402.79, stdev=3457.30 00:12:16.880 lat (usec): min=8575, max=16964, avg=12498.09, stdev=3476.70 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 8848], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9241], 00:12:16.880 | 30.00th=[ 9372], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[15664], 00:12:16.880 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16712], 95.00th=[16712], 00:12:16.880 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:12:16.880 | 99.99th=[16909] 00:12:16.880 write: IOPS=5414, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1003msec); 0 zone resets 00:12:16.880 slat (usec): min=2, max=2393, avg=91.95, stdev=330.59 00:12:16.880 clat (usec): min=1057, max=16423, avg=11675.93, stdev=3333.15 00:12:16.880 lat (usec): min=3040, max=16426, avg=11767.88, stdev=3350.37 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 8455], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8848], 00:12:16.880 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[14484], 00:12:16.880 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15795], 95.00th=[16057], 00:12:16.880 | 99.00th=[16319], 99.50th=[16319], 99.90th=[16450], 99.95th=[16450], 00:12:16.880 | 99.99th=[16450] 00:12:16.880 bw ( KiB/s): min=15448, max=26976, per=18.48%, avg=21212.00, stdev=8151.53, samples=2 00:12:16.880 iops : min= 3862, max= 6744, avg=5303.00, stdev=2037.88, samples=2 00:12:16.880 lat (msec) : 2=0.01%, 4=0.03%, 10=54.49%, 20=45.47% 00:12:16.880 cpu : usr=1.40%, sys=4.19%, ctx=1270, majf=0, minf=1 00:12:16.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:16.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.880 issued rwts: total=5120,5431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.880 job3: (groupid=0, jobs=1): err= 0: pid=691019: Thu Jul 25 19:03:09 2024 00:12:16.880 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:12:16.880 slat (nsec): min=1513, max=2594.1k, avg=97297.45, stdev=335296.19 00:12:16.880 clat (usec): min=7779, max=16965, avg=12649.55, stdev=3320.06 00:12:16.880 lat (usec): min=8148, max=16967, avg=12746.85, stdev=3331.63 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9634], 00:12:16.880 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[15533], 00:12:16.880 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16712], 95.00th=[16712], 00:12:16.880 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:12:16.880 | 99.99th=[16909] 00:12:16.880 write: IOPS=5308, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1003msec); 0 zone resets 00:12:16.880 slat (usec): min=2, max=1701, avg=91.58, stdev=312.22 00:12:16.880 clat (usec): min=1820, max=16944, avg=11704.68, stdev=3265.57 00:12:16.880 lat (usec): min=3076, max=16947, avg=11796.26, stdev=3277.50 00:12:16.880 clat percentiles (usec): 00:12:16.880 | 1.00th=[ 6915], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9110], 00:12:16.880 | 30.00th=[ 9241], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[14091], 00:12:16.880 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16057], 00:12:16.880 | 99.00th=[16450], 99.50th=[16581], 99.90th=[16909], 99.95th=[16909], 00:12:16.880 | 99.99th=[16909] 00:12:16.880 bw ( KiB/s): min=15464, max=26112, per=18.11%, avg=20788.00, stdev=7529.27, samples=2 00:12:16.880 iops : min= 3866, max= 6528, avg=5197.00, stdev=1882.32, samples=2 00:12:16.880 lat (msec) : 2=0.01%, 4=0.09%, 10=54.76%, 20=45.15% 00:12:16.880 cpu : usr=1.50%, sys=4.29%, ctx=1391, majf=0, minf=1 00:12:16.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:16.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:16.880 issued rwts: total=5120,5324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:16.880 00:12:16.880 Run status group 0 (all jobs): 00:12:16.880 READ: bw=110MiB/s (115MB/s), 19.9MiB/s-35.8MiB/s (20.9MB/s-37.5MB/s), io=110MiB (115MB), run=1002-1003msec 00:12:16.880 WRITE: bw=112MiB/s (118MB/s), 20.7MiB/s-35.9MiB/s (21.7MB/s-37.6MB/s), io=112MiB (118MB), run=1002-1003msec 00:12:16.880 00:12:16.880 Disk stats (read/write): 00:12:16.880 nvme0n1: ios=7693/7680, merge=0/0, ticks=27052/25155, in_queue=52207, util=85.17% 00:12:16.880 nvme0n2: ios=7168/7330, merge=0/0, ticks=13513/12869, in_queue=26382, util=86.83% 00:12:16.880 nvme0n3: ios=4608/4630, merge=0/0, ticks=13665/12803, in_queue=26468, util=89.03% 00:12:16.880 nvme0n4: ios=4543/4608, merge=0/0, ticks=20294/19567, in_queue=39861, util=89.60% 00:12:16.880 19:03:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:16.880 [global] 00:12:16.880 thread=1 00:12:16.880 invalidate=1 00:12:16.880 rw=randwrite 00:12:16.880 time_based=1 00:12:16.880 runtime=1 
00:12:16.880 ioengine=libaio 00:12:16.880 direct=1 00:12:16.880 bs=4096 00:12:16.880 iodepth=128 00:12:16.880 norandommap=0 00:12:16.880 numjobs=1 00:12:16.880 00:12:16.880 verify_dump=1 00:12:16.880 verify_backlog=512 00:12:16.880 verify_state_save=0 00:12:16.880 do_verify=1 00:12:16.880 verify=crc32c-intel 00:12:16.880 [job0] 00:12:16.880 filename=/dev/nvme0n1 00:12:16.880 [job1] 00:12:16.880 filename=/dev/nvme0n2 00:12:16.881 [job2] 00:12:16.881 filename=/dev/nvme0n3 00:12:16.881 [job3] 00:12:16.881 filename=/dev/nvme0n4 00:12:16.881 Could not set queue depth (nvme0n1) 00:12:16.881 Could not set queue depth (nvme0n2) 00:12:16.881 Could not set queue depth (nvme0n3) 00:12:16.881 Could not set queue depth (nvme0n4) 00:12:17.143 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.143 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.143 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.143 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:17.143 fio-3.35 00:12:17.143 Starting 4 threads 00:12:18.514 00:12:18.514 job0: (groupid=0, jobs=1): err= 0: pid=691718: Thu Jul 25 19:03:10 2024 00:12:18.514 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:18.514 slat (nsec): min=1357, max=4267.0k, avg=106448.34, stdev=401695.69 00:12:18.514 clat (usec): min=8354, max=20446, avg=13578.97, stdev=2931.34 00:12:18.514 lat (usec): min=10288, max=20450, avg=13685.42, stdev=2928.74 00:12:18.514 clat percentiles (usec): 00:12:18.514 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[10945], 20.00th=[11076], 00:12:18.514 | 30.00th=[11207], 40.00th=[11338], 50.00th=[12125], 60.00th=[13698], 00:12:18.514 | 70.00th=[15795], 80.00th=[17171], 90.00th=[18220], 95.00th=[18482], 00:12:18.514 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:12:18.514 | 99.99th=[20317] 00:12:18.514 write: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:12:18.514 slat (nsec): min=1979, max=3684.0k, avg=98176.81, stdev=370287.94 00:12:18.514 clat (usec): min=2054, max=18209, avg=12625.20, stdev=2856.88 00:12:18.514 lat (usec): min=2529, max=18405, avg=12723.37, stdev=2853.51 00:12:18.514 clat percentiles (usec): 00:12:18.514 | 1.00th=[ 7242], 5.00th=[10028], 10.00th=[10290], 20.00th=[10421], 00:12:18.514 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11600], 60.00th=[12911], 00:12:18.514 | 70.00th=[13304], 80.00th=[16188], 90.00th=[17171], 95.00th=[17433], 00:12:18.514 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:12:18.514 | 99.99th=[18220] 00:12:18.514 bw ( KiB/s): min=16384, max=23232, per=22.61%, avg=19808.00, stdev=4842.27, samples=2 00:12:18.514 iops : min= 4096, max= 5808, avg=4952.00, stdev=1210.57, samples=2 00:12:18.514 lat (msec) : 4=0.17%, 10=2.75%, 20=97.03%, 50=0.06% 00:12:18.514 cpu : usr=1.70%, sys=2.69%, ctx=2648, majf=0, minf=1 00:12:18.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:18.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.514 issued rwts: total=4608,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.514 job1: (groupid=0, jobs=1): err= 0: 
pid=691723: Thu Jul 25 19:03:10 2024 00:12:18.514 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:18.514 slat (nsec): min=1515, max=4101.4k, avg=105989.70, stdev=292963.32 00:12:18.514 clat (usec): min=9495, max=21321, avg=13485.54, stdev=2884.65 00:12:18.514 lat (usec): min=9584, max=21422, avg=13591.53, stdev=2893.82 00:12:18.514 clat percentiles (usec): 00:12:18.514 | 1.00th=[10159], 5.00th=[10552], 10.00th=[10683], 20.00th=[11076], 00:12:18.514 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[13698], 00:12:18.514 | 70.00th=[15139], 80.00th=[16909], 90.00th=[18220], 95.00th=[18482], 00:12:18.514 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20579], 99.95th=[20579], 00:12:18.514 | 99.99th=[21365] 00:12:18.514 write: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:12:18.514 slat (nsec): min=1998, max=2291.9k, avg=98776.40, stdev=268334.17 00:12:18.514 clat (usec): min=1422, max=18744, avg=12724.10, stdev=2901.19 00:12:18.514 lat (usec): min=2074, max=18747, avg=12822.87, stdev=2910.01 00:12:18.514 clat percentiles (usec): 00:12:18.514 | 1.00th=[ 5866], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:12:18.514 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11863], 60.00th=[12911], 00:12:18.514 | 70.00th=[14877], 80.00th=[16188], 90.00th=[17171], 95.00th=[17433], 00:12:18.514 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:12:18.514 | 99.99th=[18744] 00:12:18.514 bw ( KiB/s): min=16384, max=23192, per=22.59%, avg=19788.00, stdev=4813.98, samples=2 00:12:18.514 iops : min= 4096, max= 5798, avg=4947.00, stdev=1203.50, samples=2 00:12:18.514 lat (msec) : 2=0.01%, 4=0.25%, 10=3.29%, 20=96.39%, 50=0.06% 00:12:18.514 cpu : usr=1.50%, sys=3.19%, ctx=2702, majf=0, minf=1 00:12:18.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:18.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.515 issued rwts: total=4608,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.515 job2: (groupid=0, jobs=1): err= 0: pid=691751: Thu Jul 25 19:03:10 2024 00:12:18.515 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:12:18.515 slat (nsec): min=1363, max=3172.8k, avg=89944.09, stdev=285493.35 00:12:18.515 clat (usec): min=5858, max=20134, avg=11519.56, stdev=3883.03 00:12:18.515 lat (usec): min=5860, max=20161, avg=11609.50, stdev=3902.98 00:12:18.515 clat percentiles (usec): 00:12:18.515 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7439], 00:12:18.515 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[12911], 60.00th=[13304], 00:12:18.515 | 70.00th=[13435], 80.00th=[13960], 90.00th=[17695], 95.00th=[18220], 00:12:18.515 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[19792], 00:12:18.515 | 99.99th=[20055] 00:12:18.515 write: IOPS=5651, BW=22.1MiB/s (23.1MB/s)(22.1MiB/1003msec); 0 zone resets 00:12:18.515 slat (nsec): min=1923, max=3100.9k, avg=84919.05, stdev=278260.14 00:12:18.515 clat (usec): min=1925, max=17994, avg=10942.62, stdev=3731.64 00:12:18.515 lat (usec): min=2761, max=17997, avg=11027.54, stdev=3751.69 00:12:18.515 clat percentiles (usec): 00:12:18.515 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7177], 00:12:18.515 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[12125], 60.00th=[12911], 00:12:18.515 | 70.00th=[13173], 80.00th=[13304], 90.00th=[16712], 
95.00th=[17171], 00:12:18.515 | 99.00th=[17695], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:12:18.515 | 99.99th=[17957] 00:12:18.515 bw ( KiB/s): min=21440, max=23616, per=25.72%, avg=22528.00, stdev=1538.66, samples=2 00:12:18.515 iops : min= 5360, max= 5904, avg=5632.00, stdev=384.67, samples=2 00:12:18.515 lat (msec) : 2=0.01%, 4=0.14%, 10=46.08%, 20=53.75%, 50=0.02% 00:12:18.515 cpu : usr=1.90%, sys=2.40%, ctx=1769, majf=0, minf=2 00:12:18.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:18.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.515 issued rwts: total=5632,5668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.515 job3: (groupid=0, jobs=1): err= 0: pid=691764: Thu Jul 25 19:03:10 2024 00:12:18.515 read: IOPS=5876, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1003msec) 00:12:18.515 slat (nsec): min=1384, max=3027.5k, avg=81810.95, stdev=261366.20 00:12:18.515 clat (usec): min=1948, max=19402, avg=10382.18, stdev=3533.85 00:12:18.515 lat (usec): min=2787, max=19405, avg=10464.00, stdev=3554.75 00:12:18.515 clat percentiles (usec): 00:12:18.515 | 1.00th=[ 5866], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6783], 00:12:18.515 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[12649], 00:12:18.515 | 70.00th=[13173], 80.00th=[13435], 90.00th=[14091], 95.00th=[16909], 00:12:18.515 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19268], 99.95th=[19530], 00:12:18.515 | 99.99th=[19530] 00:12:18.515 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:12:18.515 slat (usec): min=2, max=2616, avg=82.18, stdev=258.73 00:12:18.515 clat (usec): min=6004, max=18773, avg=10695.34, stdev=3774.05 00:12:18.515 lat (usec): min=6007, max=18776, avg=10777.52, stdev=3798.64 00:12:18.515 clat percentiles (usec): 00:12:18.515 | 1.00th=[ 6063], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6718], 00:12:18.515 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 9241], 60.00th=[12911], 00:12:18.515 | 70.00th=[13173], 80.00th=[13435], 90.00th=[16188], 95.00th=[16909], 00:12:18.515 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:12:18.515 | 99.99th=[18744] 00:12:18.515 bw ( KiB/s): min=19600, max=29552, per=28.06%, avg=24576.00, stdev=7037.13, samples=2 00:12:18.515 iops : min= 4900, max= 7388, avg=6144.00, stdev=1759.28, samples=2 00:12:18.515 lat (msec) : 2=0.01%, 4=0.14%, 10=52.13%, 20=47.72% 00:12:18.515 cpu : usr=0.80%, sys=4.09%, ctx=1814, majf=0, minf=1 00:12:18.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:18.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:18.515 issued rwts: total=5894,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:18.515 00:12:18.515 Run status group 0 (all jobs): 00:12:18.515 READ: bw=80.8MiB/s (84.7MB/s), 17.9MiB/s-23.0MiB/s (18.8MB/s-24.1MB/s), io=81.0MiB (85.0MB), run=1003-1003msec 00:12:18.515 WRITE: bw=85.5MiB/s (89.7MB/s), 19.8MiB/s-23.9MiB/s (20.7MB/s-25.1MB/s), io=85.8MiB (90.0MB), run=1003-1003msec 00:12:18.515 00:12:18.515 Disk stats (read/write): 00:12:18.515 nvme0n1: ios=3897/4096, merge=0/0, ticks=13844/13519, in_queue=27363, util=86.47% 00:12:18.515 nvme0n2: ios=3833/4096, 
merge=0/0, ticks=13697/13611, in_queue=27308, util=87.11% 00:12:18.515 nvme0n3: ios=4758/5120, merge=0/0, ticks=13408/13606, in_queue=27014, util=89.10% 00:12:18.515 nvme0n4: ios=5120/5500, merge=0/0, ticks=13129/14240, in_queue=27369, util=89.64% 00:12:18.515 19:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:18.515 19:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=691961 00:12:18.515 19:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:18.515 19:03:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:18.515 [global] 00:12:18.515 thread=1 00:12:18.515 invalidate=1 00:12:18.515 rw=read 00:12:18.515 time_based=1 00:12:18.515 runtime=10 00:12:18.515 ioengine=libaio 00:12:18.515 direct=1 00:12:18.515 bs=4096 00:12:18.515 iodepth=1 00:12:18.515 norandommap=1 00:12:18.515 numjobs=1 00:12:18.515 00:12:18.515 [job0] 00:12:18.515 filename=/dev/nvme0n1 00:12:18.515 [job1] 00:12:18.515 filename=/dev/nvme0n2 00:12:18.515 [job2] 00:12:18.515 filename=/dev/nvme0n3 00:12:18.515 [job3] 00:12:18.515 filename=/dev/nvme0n4 00:12:18.515 Could not set queue depth (nvme0n1) 00:12:18.515 Could not set queue depth (nvme0n2) 00:12:18.515 Could not set queue depth (nvme0n3) 00:12:18.515 Could not set queue depth (nvme0n4) 00:12:18.515 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.515 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.515 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.515 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.515 fio-3.35 00:12:18.515 Starting 4 threads 00:12:21.789 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:21.789 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=77881344, buflen=4096 00:12:21.789 fio: pid=692316, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:21.789 19:03:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:21.789 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=84217856, buflen=4096 00:12:21.789 fio: pid=692315, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:21.789 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:21.789 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:21.789 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=28868608, buflen=4096 00:12:21.789 fio: pid=692299, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:21.789 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:21.789 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:22.047 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45457408, buflen=4096 00:12:22.047 fio: pid=692314, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:22.047 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.047 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:22.047 00:12:22.047 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=692299: Thu Jul 25 19:03:14 2024 00:12:22.047 read: IOPS=7549, BW=29.5MiB/s (30.9MB/s)(91.5MiB/3104msec) 00:12:22.047 slat (usec): min=6, max=19661, avg= 9.65, stdev=185.19 00:12:22.047 clat (usec): min=49, max=21907, avg=120.55, stdev=202.47 00:12:22.047 lat (usec): min=56, max=21915, avg=130.21, stdev=276.84 00:12:22.047 clat percentiles (usec): 00:12:22.047 | 1.00th=[ 64], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 87], 00:12:22.047 | 30.00th=[ 114], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 128], 00:12:22.047 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 151], 95.00th=[ 167], 00:12:22.047 | 99.00th=[ 180], 99.50th=[ 204], 99.90th=[ 233], 99.95th=[ 239], 00:12:22.047 | 99.99th=[ 1074] 00:12:22.047 bw ( KiB/s): min=27464, max=33392, per=27.97%, avg=30394.33, stdev=2325.73, samples=6 00:12:22.047 iops : min= 6866, max= 8348, avg=7598.50, stdev=581.35, samples=6 00:12:22.047 lat (usec) : 50=0.01%, 100=25.22%, 250=74.73%, 500=0.01%, 1000=0.01% 00:12:22.047 lat (msec) : 2=0.01%, 50=0.01% 00:12:22.047 cpu : usr=2.51%, sys=7.93%, ctx=23439, majf=0, minf=1 00:12:22.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 issued rwts: total=23433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.047 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=692314: Thu Jul 25 19:03:14 2024 00:12:22.047 read: IOPS=8250, BW=32.2MiB/s (33.8MB/s)(107MiB/3331msec) 00:12:22.047 slat (usec): min=6, max=15904, avg=10.47, stdev=210.10 00:12:22.047 clat (usec): min=35, max=22040, avg=108.95, stdev=188.05 00:12:22.047 lat (usec): min=55, max=22048, avg=119.42, stdev=281.67 00:12:22.047 clat percentiles (usec): 00:12:22.047 | 1.00th=[ 54], 5.00th=[ 59], 10.00th=[ 70], 20.00th=[ 77], 00:12:22.047 | 30.00th=[ 83], 40.00th=[ 110], 50.00th=[ 119], 60.00th=[ 124], 00:12:22.047 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 147], 00:12:22.047 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 186], 00:12:22.047 | 99.99th=[ 1074] 00:12:22.047 bw ( KiB/s): min=29232, max=36192, per=29.66%, avg=32227.67, stdev=2653.88, samples=6 00:12:22.047 iops : min= 7308, max= 9048, avg=8056.83, stdev=663.43, samples=6 00:12:22.047 lat (usec) : 50=0.02%, 100=38.36%, 250=61.59%, 500=0.01% 00:12:22.047 lat (msec) : 2=0.01%, 50=0.01% 00:12:22.047 cpu : usr=2.10%, sys=10.21%, ctx=27490, majf=0, minf=1 00:12:22.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 issued rwts: total=27483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.047 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=692315: Thu Jul 25 19:03:14 2024 00:12:22.047 read: IOPS=7092, BW=27.7MiB/s (29.1MB/s)(80.3MiB/2899msec) 00:12:22.047 slat (usec): min=6, max=15916, avg= 9.06, stdev=156.32 00:12:22.047 clat (usec): min=58, max=1104, avg=129.91, stdev=24.93 00:12:22.047 lat (usec): min=65, max=16013, avg=138.97, stdev=157.90 00:12:22.047 clat percentiles (usec): 00:12:22.047 | 1.00th=[ 81], 5.00th=[ 88], 10.00th=[ 93], 20.00th=[ 123], 00:12:22.047 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:12:22.047 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 163], 95.00th=[ 172], 00:12:22.047 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 223], 99.95th=[ 229], 00:12:22.047 | 99.99th=[ 1029] 00:12:22.047 bw ( KiB/s): min=27600, max=28936, per=26.01%, avg=28268.80, stdev=638.06, samples=5 00:12:22.047 iops : min= 6900, max= 7234, avg=7067.20, stdev=159.52, samples=5 00:12:22.047 lat (usec) : 100=13.89%, 250=86.09% 00:12:22.047 lat (msec) : 2=0.01% 00:12:22.047 cpu : usr=2.00%, sys=8.76%, ctx=20564, majf=0, minf=1 00:12:22.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.047 issued rwts: total=20562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.047 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=692316: Thu Jul 25 19:03:14 2024 00:12:22.047 read: IOPS=7045, BW=27.5MiB/s (28.9MB/s)(74.3MiB/2699msec) 00:12:22.047 slat (nsec): min=6531, max=42739, avg=8441.26, stdev=2100.12 00:12:22.047 clat (usec): min=70, max=245, avg=131.05, stdev=19.06 00:12:22.047 lat (usec): min=77, max=252, avg=139.49, stdev=18.93 00:12:22.047 clat percentiles (usec): 00:12:22.048 | 1.00th=[ 88], 5.00th=[ 94], 10.00th=[ 117], 20.00th=[ 123], 00:12:22.048 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:12:22.048 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 161], 95.00th=[ 172], 00:12:22.048 | 99.00th=[ 186], 99.50th=[ 198], 99.90th=[ 223], 99.95th=[ 225], 00:12:22.048 | 99.99th=[ 245] 00:12:22.048 bw ( KiB/s): min=27608, max=29216, per=26.14%, avg=28400.00, stdev=753.89, samples=5 00:12:22.048 iops : min= 6902, max= 7304, avg=7100.00, stdev=188.47, samples=5 00:12:22.048 lat (usec) : 100=7.14%, 250=92.86% 00:12:22.048 cpu : usr=2.52%, sys=8.75%, ctx=19016, majf=0, minf=2 00:12:22.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.048 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.048 issued rwts: total=19015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.048 00:12:22.048 Run status group 0 (all jobs): 00:12:22.048 READ: bw=106MiB/s (111MB/s), 27.5MiB/s-32.2MiB/s (28.9MB/s-33.8MB/s), io=353MiB (371MB), run=2699-3331msec 00:12:22.048 00:12:22.048 Disk stats (read/write): 00:12:22.048 
nvme0n1: ios=23432/0, merge=0/0, ticks=2640/0, in_queue=2640, util=93.44% 00:12:22.048 nvme0n2: ios=24830/0, merge=0/0, ticks=2692/0, in_queue=2692, util=93.68% 00:12:22.048 nvme0n3: ios=20178/0, merge=0/0, ticks=2495/0, in_queue=2495, util=95.31% 00:12:22.048 nvme0n4: ios=18326/0, merge=0/0, ticks=2260/0, in_queue=2260, util=96.38% 00:12:22.305 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.305 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:22.562 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.562 19:03:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:22.820 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:22.820 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:23.077 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:23.077 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:23.334 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:23.334 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 691961 00:12:23.334 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:23.334 19:03:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:25.856 nvmf hotplug test: fio failed as expected 00:12:25.856 19:03:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:25.856 rmmod nvme_rdma 00:12:25.856 rmmod nvme_fabrics 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 688225 ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 688225 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 688225 ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 688225 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 688225 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 688225' 00:12:25.856 killing process with pid 688225 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 688225 00:12:25.856 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 688225 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:26.114 00:12:26.114 real 0m29.587s 00:12:26.114 user 
2m6.940s 00:12:26.114 sys 0m8.727s 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.114 ************************************ 00:12:26.114 END TEST nvmf_fio_target 00:12:26.114 ************************************ 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:26.114 ************************************ 00:12:26.114 START TEST nvmf_bdevio 00:12:26.114 ************************************ 00:12:26.114 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:26.373 * Looking for test storage... 00:12:26.373 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.373 19:03:18 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.373 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.374 19:03:18 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:32.946 19:03:24 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:32.946 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:32.946 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:32.946 Found net devices under 0000:af:00.0: mlx_0_0 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:32.946 Found net devices under 0000:af:00.1: mlx_0_1 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:12:32.946 19:03:24 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:32.946 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:32.947 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.947 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:32.947 altname enp175s0f0np0 00:12:32.947 altname ens801f0np0 00:12:32.947 inet 192.168.100.8/24 scope global mlx_0_0 00:12:32.947 valid_lft forever preferred_lft forever 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:32.947 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:32.947 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:32.947 altname enp175s0f1np1 00:12:32.947 altname ens801f1np1 00:12:32.947 inet 192.168.100.9/24 scope global mlx_0_1 00:12:32.947 valid_lft forever preferred_lft forever 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:32.947 192.168.100.9' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:32.947 192.168.100.9' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:32.947 192.168.100.9' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=696619 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 696619 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 696619 ']' 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.947 19:03:24 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.948 [2024-07-25 19:03:24.568632] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:32.948 [2024-07-25 19:03:24.568678] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.948 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.948 [2024-07-25 19:03:24.637674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.948 [2024-07-25 19:03:24.715791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.948 [2024-07-25 19:03:24.715828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.948 [2024-07-25 19:03:24.715835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.948 [2024-07-25 19:03:24.715845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
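[note] nvmfappstart launches nvmf_tgt with -m 0x78, and EAL is handed -c 0x78: a core mask, not a core count. 0x78 is binary 1111000, i.e. cores 3, 4, 5 and 6, which is exactly the set the reactor lines just below report. A quick decoding sketch (plain bash, not part of the harness):

    # print the core IDs selected by an SPDK/DPDK core mask
    mask=0x78
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # -> core 3, core 4, core 5, core 6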
00:12:32.948 [2024-07-25 19:03:24.715850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.948 [2024-07-25 19:03:24.715962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.948 [2024-07-25 19:03:24.716089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.948 [2024-07-25 19:03:24.716195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.948 [2024-07-25 19:03:24.716197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.948 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.948 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:32.948 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.948 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.948 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 [2024-07-25 19:03:25.476203] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16366e0/0x163abd0) succeed. 00:12:33.205 [2024-07-25 19:03:25.485831] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1637d20/0x167c270) succeed. 
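[note] With the RDMA transport created above, the rpc_cmd calls that follow finish wiring the target: a 64 MiB / 512 B-block malloc bdev, a subsystem, a namespace, and an RDMA listener. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py (talking to /var/tmp/spdk.sock, per the waitforlisten line above); condensed, the traced sequence is equivalent to (addresses and NQN copied from the trace):

    # stand up an NVMe-oF RDMA target with one malloc-backed namespace
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420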
00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 Malloc0 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.205 [2024-07-25 19:03:25.653145] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.205 { 00:12:33.205 "params": { 00:12:33.205 "name": "Nvme$subsystem", 00:12:33.205 "trtype": "$TEST_TRANSPORT", 00:12:33.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.205 "adrfam": "ipv4", 00:12:33.205 "trsvcid": "$NVMF_PORT", 00:12:33.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.205 "hdgst": ${hdgst:-false}, 00:12:33.205 "ddgst": ${ddgst:-false} 00:12:33.205 }, 00:12:33.205 "method": "bdev_nvme_attach_controller" 00:12:33.205 } 00:12:33.205 EOF 00:12:33.205 )") 00:12:33.205 19:03:25 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:33.205 19:03:25 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.205 "params": { 00:12:33.205 "name": "Nvme1", 00:12:33.205 "trtype": "rdma", 00:12:33.205 "traddr": "192.168.100.8", 00:12:33.205 "adrfam": "ipv4", 00:12:33.205 "trsvcid": "4420", 00:12:33.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.205 "hdgst": false, 00:12:33.205 "ddgst": false 00:12:33.205 }, 00:12:33.205 "method": "bdev_nvme_attach_controller" 00:12:33.205 }' 00:12:33.462 [2024-07-25 19:03:25.698963] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:33.462 [2024-07-25 19:03:25.699006] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696874 ] 00:12:33.462 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.462 [2024-07-25 19:03:25.768946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:33.462 [2024-07-25 19:03:25.842645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.462 [2024-07-25 19:03:25.842753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.462 [2024-07-25 19:03:25.842753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.719 I/O targets: 00:12:33.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:33.719 00:12:33.719 00:12:33.719 CUnit - A unit testing framework for C - Version 2.1-3 00:12:33.719 http://cunit.sourceforge.net/ 00:12:33.719 00:12:33.719 00:12:33.719 Suite: bdevio tests on: Nvme1n1 00:12:33.719 Test: blockdev write read block ...passed 00:12:33.719 Test: blockdev write zeroes read block ...passed 00:12:33.719 Test: blockdev write zeroes read no split ...passed 00:12:33.719 Test: blockdev write zeroes read split ...passed 00:12:33.719 Test: blockdev write zeroes read split partial ...passed 00:12:33.719 Test: blockdev reset ...[2024-07-25 19:03:26.048031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:33.719 [2024-07-25 19:03:26.070922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:33.719 [2024-07-25 19:03:26.098151] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:33.719 passed 00:12:33.719 Test: blockdev write read 8 blocks ...passed 00:12:33.719 Test: blockdev write read size > 128k ...passed 00:12:33.719 Test: blockdev write read invalid size ...passed 00:12:33.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.720 Test: blockdev write read max offset ...passed 00:12:33.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.720 Test: blockdev writev readv 8 blocks ...passed 00:12:33.720 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.720 Test: blockdev writev readv block ...passed 00:12:33.720 Test: blockdev writev readv size > 128k ...passed 00:12:33.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.720 Test: blockdev comparev and writev ...[2024-07-25 19:03:26.101511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.101550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.101745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.101762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.101938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.101956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.101963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.102132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.102140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.102148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.720 [2024-07-25 19:03:26.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:33.720 passed 00:12:33.720 Test: blockdev nvme passthru rw ...passed 00:12:33.720 Test: blockdev nvme passthru vendor specific ...[2024-07-25 19:03:26.102445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:33.720 [2024-07-25 19:03:26.102455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.102497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:33.720 [2024-07-25 19:03:26.102505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.102551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:33.720 [2024-07-25 19:03:26.102558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:33.720 [2024-07-25 19:03:26.102600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:33.720 [2024-07-25 19:03:26.102608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:33.720 passed 00:12:33.720 Test: blockdev nvme admin passthru ...passed 00:12:33.720 Test: blockdev copy ...passed 00:12:33.720 00:12:33.720 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.720 suites 1 1 n/a 0 0 00:12:33.720 tests 23 23 23 0 0 00:12:33.720 asserts 152 152 152 0 n/a 00:12:33.720 00:12:33.720 Elapsed time = 0.170 seconds 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.977 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:33.978 rmmod nvme_rdma 00:12:33.978 rmmod nvme_fabrics 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.978 19:03:26 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 696619 ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 696619 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 696619 ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 696619 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 696619 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 696619' 00:12:33.978 killing process with pid 696619 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 696619 00:12:33.978 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 696619 00:12:34.236 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.236 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:34.236 00:12:34.236 real 0m8.153s 00:12:34.236 user 0m10.659s 00:12:34.236 sys 0m4.962s 00:12:34.236 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.236 19:03:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.236 ************************************ 00:12:34.236 END TEST nvmf_bdevio 00:12:34.236 ************************************ 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:34.495 00:12:34.495 real 4m13.366s 00:12:34.495 user 11m33.494s 00:12:34.495 sys 1m21.091s 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:34.495 ************************************ 00:12:34.495 END TEST nvmf_target_core 00:12:34.495 ************************************ 00:12:34.495 19:03:26 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:34.495 19:03:26 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.495 19:03:26 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.495 19:03:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:34.495 ************************************ 00:12:34.495 START TEST nvmf_target_extra 00:12:34.495 ************************************ 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:12:34.495 * Looking for test storage... 00:12:34.495 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.495 19:03:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.496 ************************************ 00:12:34.496 START TEST nvmf_example 00:12:34.496 ************************************ 00:12:34.496 19:03:26 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:12:34.755 * Looking for test storage... 00:12:34.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.755 19:03:27 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:34.755 19:03:27 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
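
build_nvmf_example_args assembles the example's command line as a bash array rather than a flat string, so each flag and value stays a separate argv entry when the array is expanded. A reduced sketch, assuming SPDK_EXAMPLE_DIR and NVMF_APP_SHM_ID are set as in the harness:

    # Accumulate the command in an array; "${arr[@]}" expands to one word per element.
    NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
    NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
    "${NVMF_EXAMPLE[@]}"    # launches the example with the accumulated flags
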
nvmf/common.sh@293 -- # pci_drivers=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:41.324 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:41.324 19:03:32 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:41.324 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:41.324 Found net devices under 0000:af:00.0: mlx_0_0 00:12:41.324 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:41.325 Found net devices under 0000:af:00.1: mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:41.325 19:03:32 
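
Each matched PCI function is resolved to its kernel network interface by globbing the device's net/ directory under sysfs, which is how 0000:af:00.0 and 0000:af:00.1 map to mlx_0_0 and mlx_0_1 above. The same lookup in isolation (the pci value is one of the two found on this node):

    # Print the netdev name(s) bound to a PCI function via sysfs.
    pci=0000:af:00.0
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue    # function has no network interface
        echo "${netdir##*/}"            # strip the path, keeping e.g. mlx_0_0
    done
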
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 
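
rdma_device_init simply loads the InfiniBand and RDMA-CM kernel modules one after another before touching any interface. The sequence of modprobe calls above can be written as a loop over the same module list:

    # Load the kernel modules the RDMA transport depends on (needs root).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || echo "failed to load $mod" >&2
    done
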
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:41.325 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:41.325 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:41.325 altname enp175s0f0np0 00:12:41.325 altname ens801f0np0 00:12:41.325 inet 192.168.100.8/24 scope global mlx_0_0 00:12:41.325 valid_lft forever preferred_lft forever 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:41.325 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:41.325 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:41.325 altname enp175s0f1np1 00:12:41.325 altname ens801f1np1 00:12:41.325 inet 192.168.100.9/24 scope global mlx_0_1 00:12:41.325 valid_lft forever preferred_lft forever 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:41.325 19:03:32 
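
get_ip_address pulls the bare IPv4 address out of the one-line `ip -o -4` dump: field 4 is ADDR/PREFIX, and cut drops the prefix length, yielding 192.168.100.8 and 192.168.100.9 for the two ports. The pipeline as a standalone function:

    # First IPv4 address of an interface, without the /prefix suffix.
    get_ip_address() {
        local interface=$1
        # -o prints one record per line; $4 is "192.168.100.8/24" here.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
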
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:12:41.325 192.168.100.9' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:41.325 192.168.100.9' 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:12:41.325 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:41.326 192.168.100.9' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=700255 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 700255 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 700255 ']' 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
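
RDMA_IP_LIST is a newline-separated string, so common.sh takes the first address with `head -n 1` and the second with `tail -n +2 | head -n 1`. Reproduced on its own:

    # One address per line; pick them apart positionally.
    RDMA_IP_LIST="$(printf '%s\n' 192.168.100.8 192.168.100.9)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # first line
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # second line
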
00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.326 19:03:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.326 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.583 19:03:33 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.583 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.583 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:41.583 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.583 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:41.840 19:03:34 
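
With the example target up and listening on /var/tmp/spdk.sock, the test configures it entirely over JSON-RPC: create the RDMA transport, back a namespace with a 64 MiB malloc bdev, create the subsystem, attach the namespace, and expose a listener on the first target IP. The rpc_cmd calls above forward to scripts/rpc.py, so the equivalent manual sequence (assuming the default RPC socket) would be roughly:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512    # 64 MiB, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
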
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:41.840 19:03:34 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.840 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.045 Initializing NVMe Controllers 00:12:54.045 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:54.045 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:54.045 Initialization complete. Launching workers. 00:12:54.045 ======================================================== 00:12:54.045 Latency(us) 00:12:54.045 Device Information : IOPS MiB/s Average min max 00:12:54.045 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25438.90 99.37 2515.53 653.76 16042.32 00:12:54.045 ======================================================== 00:12:54.045 Total : 25438.90 99.37 2515.53 653.76 16042.32 00:12:54.045 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:54.045 rmmod nvme_rdma 00:12:54.045 rmmod nvme_fabrics 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 700255 ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 700255 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 700255 ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 700255 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 700255 00:12:54.045 19:03:45 
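
As a sanity check on the perf table above, the MiB/s column follows from the IOPS column and the 4096-byte I/O size passed to spdk_nvme_perf (-o 4096): 25438.90 IOPS x 4096 B ≈ 104,197,734 B/s, and 104,197,734 / 1,048,576 ≈ 99.37 MiB/s, matching the reported figure.
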
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 700255' 00:12:54.045 killing process with pid 700255 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 700255 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 700255 00:12:54.045 nvmf threads initialize successfully 00:12:54.045 bdev subsystem init successfully 00:12:54.045 created a nvmf target service 00:12:54.045 create targets's poll groups done 00:12:54.045 all subsystems of target started 00:12:54.045 nvmf target is running 00:12:54.045 all subsystems of target stopped 00:12:54.045 destroy targets's poll groups done 00:12:54.045 destroyed the nvmf target service 00:12:54.045 bdev subsystem finish successfully 00:12:54.045 nvmf threads destroy successfully 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:54.045 00:12:54.045 real 0m18.736s 00:12:54.045 user 0m51.895s 00:12:54.045 sys 0m4.853s 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.045 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:54.045 ************************************ 00:12:54.045 END TEST nvmf_example 00:12:54.045 ************************************ 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.046 ************************************ 00:12:54.046 START TEST nvmf_filesystem 00:12:54.046 ************************************ 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:54.046 * Looking for test storage... 
00:12:54.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES=128 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:54.046 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:54.047 #define SPDK_CONFIG_H 00:12:54.047 #define SPDK_CONFIG_APPS 1 00:12:54.047 #define SPDK_CONFIG_ARCH native 00:12:54.047 #undef SPDK_CONFIG_ASAN 00:12:54.047 #undef SPDK_CONFIG_AVAHI 00:12:54.047 #undef SPDK_CONFIG_CET 00:12:54.047 #define SPDK_CONFIG_COVERAGE 1 00:12:54.047 #define SPDK_CONFIG_CROSS_PREFIX 00:12:54.047 #undef 
SPDK_CONFIG_CRYPTO 00:12:54.047 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:54.047 #undef SPDK_CONFIG_CUSTOMOCF 00:12:54.047 #undef SPDK_CONFIG_DAOS 00:12:54.047 #define SPDK_CONFIG_DAOS_DIR 00:12:54.047 #define SPDK_CONFIG_DEBUG 1 00:12:54.047 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:54.047 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:54.047 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:54.047 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:54.047 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:54.047 #undef SPDK_CONFIG_DPDK_UADK 00:12:54.047 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:54.047 #define SPDK_CONFIG_EXAMPLES 1 00:12:54.047 #undef SPDK_CONFIG_FC 00:12:54.047 #define SPDK_CONFIG_FC_PATH 00:12:54.047 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:54.047 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:54.047 #undef SPDK_CONFIG_FUSE 00:12:54.047 #undef SPDK_CONFIG_FUZZER 00:12:54.047 #define SPDK_CONFIG_FUZZER_LIB 00:12:54.047 #undef SPDK_CONFIG_GOLANG 00:12:54.047 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:54.047 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:54.047 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:54.047 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:54.047 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:54.047 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:54.047 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:54.047 #define SPDK_CONFIG_IDXD 1 00:12:54.047 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:54.047 #undef SPDK_CONFIG_IPSEC_MB 00:12:54.047 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:54.047 #define SPDK_CONFIG_ISAL 1 00:12:54.047 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:54.047 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:54.047 #define SPDK_CONFIG_LIBDIR 00:12:54.047 #undef SPDK_CONFIG_LTO 00:12:54.047 #define SPDK_CONFIG_MAX_LCORES 128 00:12:54.047 #define SPDK_CONFIG_NVME_CUSE 1 00:12:54.047 #undef SPDK_CONFIG_OCF 00:12:54.047 #define SPDK_CONFIG_OCF_PATH 00:12:54.047 #define SPDK_CONFIG_OPENSSL_PATH 00:12:54.047 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:54.047 #define SPDK_CONFIG_PGO_DIR 00:12:54.047 #undef SPDK_CONFIG_PGO_USE 00:12:54.047 #define SPDK_CONFIG_PREFIX /usr/local 00:12:54.047 #undef SPDK_CONFIG_RAID5F 00:12:54.047 #undef SPDK_CONFIG_RBD 00:12:54.047 #define SPDK_CONFIG_RDMA 1 00:12:54.047 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:54.047 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:54.047 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:54.047 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:54.047 #define SPDK_CONFIG_SHARED 1 00:12:54.047 #undef SPDK_CONFIG_SMA 00:12:54.047 #define SPDK_CONFIG_TESTS 1 00:12:54.047 #undef SPDK_CONFIG_TSAN 00:12:54.047 #define SPDK_CONFIG_UBLK 1 00:12:54.047 #define SPDK_CONFIG_UBSAN 1 00:12:54.047 #undef SPDK_CONFIG_UNIT_TESTS 00:12:54.047 #undef SPDK_CONFIG_URING 00:12:54.047 #define SPDK_CONFIG_URING_PATH 00:12:54.047 #undef SPDK_CONFIG_URING_ZNS 00:12:54.047 #undef SPDK_CONFIG_USDT 00:12:54.047 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:54.047 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:54.047 #undef SPDK_CONFIG_VFIO_USER 00:12:54.047 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:54.047 #define SPDK_CONFIG_VHOST 1 00:12:54.047 #define SPDK_CONFIG_VIRTIO 1 00:12:54.047 #undef SPDK_CONFIG_VTUNE 00:12:54.047 #define SPDK_CONFIG_VTUNE_DIR 00:12:54.047 #define SPDK_CONFIG_WERROR 1 00:12:54.047 #define SPDK_CONFIG_WPDK_DIR 00:12:54.047 #undef SPDK_CONFIG_XNVME 00:12:54.047 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:54.047 19:03:45 
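
applications.sh verifies the build flavor by glob-matching the full text of config.h against *#define SPDK_CONFIG_DEBUG* (the backslash-escaped run above is that pattern with every character quoted by xtrace). The same check in isolation, using the config.h path this job tested for earlier:

    # True only if the tree was configured with debug enabled.
    config=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
    if [[ "$(< "$config")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"
    fi
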
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.047 19:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:54.047 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:54.048 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:54.049 19:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:54.049 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=rdma 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 702430 ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 702430 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.Lx7pgr 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Lx7pgr/tests/target /tmp/spdk.Lx7pgr 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:54.050 19:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=194445869056 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=201248784384 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6802915328 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=100611096576 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=100624392192 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=13295616 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=40226676736 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=40249757696 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23080960 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=100624064512 00:12:54.050 19:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=100624392192 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=327680 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=20124864512 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20124876800 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12288 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:54.050 * Looking for test storage... 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=194445869056 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=9017507840 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:54.050 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.051 19:03:45 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.051 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.051 19:03:45 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.051 19:03:46 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- 
# x722=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:12:59.330 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
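Both ConnectX-5 functions (0000:af:00.0 and 0000:af:00.1, Mellanox vendor 0x15b3, device 0x1017) match the mlx ID list built above, and because SPDK_TEST_NVMF_NICS=mlx5 the pci_devs array is narrowed to just those two entries; the RDMA path also lengthens the connect timeout to 'nvme connect -i 15'. The loop continuing below maps each PCI function to its kernel netdev through sysfs. A minimal sketch of that mapping, reconstructed from the xtrace (the array names and sysfs glob come from the trace; the condensed loop body is an assumption):

    for pci in "${pci_devs[@]}"; do
        # Each PCI function exposes its netdev(s) under sysfs, e.g. .../net/mlx_0_0.
        # Assumes 'shopt -s nullglob' so a missing netdev yields an empty array.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue    # no netdev bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done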
00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:12:59.330 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:12:59.330 Found net devices under 0000:af:00.0: mlx_0_0 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.330 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:12:59.330 Found net devices under 0000:af:00.1: mlx_0_1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 
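With physical Mellanox hardware present (NET_TYPE=phy, is_hw=yes), rdma_device_init skips soft-RoCE creation and instead loads the kernel InfiniBand/RDMA stack before assigning addresses. The module list in this sketch is taken verbatim from the modprobe lines that follow; folding them into a loop is a simplification for brevity, not the script's literal form:

    load_ib_rdma_modules() {
        # Only meaningful on Linux; the trace guards with: '[' Linux '!=' Linux ']'
        [[ $(uname -s) != Linux ]] && return 0
        local mod
        for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
            modprobe "$mod"
        done
    }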
00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.331 19:03:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:59.331 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.331 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:12:59.331 altname enp175s0f0np0 00:12:59.331 altname ens801f0np0 00:12:59.331 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.331 valid_lft forever preferred_lft forever 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:59.331 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.331 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:12:59.331 altname enp175s0f1np1 00:12:59.331 altname ens801f1np1 00:12:59.331 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.331 valid_lft forever preferred_lft forever 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- 
# rxe_cfg rxe-net 00:12:59.331 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.590 192.168.100.9' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:59.590 192.168.100.9' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:12:59.590 19:03:51 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:59.590 192.168.100.9' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:59.590 ************************************ 00:12:59.590 START TEST nvmf_filesystem_no_in_capsule 00:12:59.590 ************************************ 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=705587 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 705587 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.590 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 705587 ']' 00:12:59.591 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
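The address bookkeeping above reduces to: read the IPv4 address off each RDMA net device, take the first as the target address and the second as the secondary. A condensed sketch; ip_of is a hypothetical stand-in for the harness's get_ip_address helper:

    ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST="$(ip_of mlx_0_0)
    $(ip_of mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9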
00:12:59.591 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.591 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.591 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.591 19:03:51 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.591 [2024-07-25 19:03:51.974650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:59.591 [2024-07-25 19:03:51.974699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.591 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.591 [2024-07-25 19:03:52.045216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.849 [2024-07-25 19:03:52.125092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.849 [2024-07-25 19:03:52.125128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.849 [2024-07-25 19:03:52.125135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.849 [2024-07-25 19:03:52.125141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.849 [2024-07-25 19:03:52.125146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
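nvmfappstart boils down to launching the target binary and waiting until its RPC socket answers; a rough equivalent under the assumptions of this run (polling with rpc_get_methods is an approximation of waitforlisten, and the rpc.py path is assumed relative to the SPDK checkout):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target answers on /var/tmp/spdk.sock.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done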
00:12:59.849 [2024-07-25 19:03:52.125202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.849 [2024-07-25 19:03:52.125325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.849 [2024-07-25 19:03:52.125430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.849 [2024-07-25 19:03:52.125431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.417 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.417 [2024-07-25 19:03:52.863237] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:00.417 [2024-07-25 19:03:52.883452] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14e6df0/0x14eb2e0) succeed. 00:13:00.677 [2024-07-25 19:03:52.892904] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14e8430/0x152c980) succeed. 
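Everything this pass provisions on the target goes through rpc_cmd; condensed below with the values from this run, approximating rpc_cmd as a thin wrapper over scripts/rpc.py:

    rpc_cmd() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1   # 512 MiB ramdisk, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420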
00:13:00.677 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.677 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:00.677 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.677 19:03:52 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.677 Malloc1 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.677 [2024-07-25 19:03:53.132836] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:00.677 19:03:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.677 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:00.937 { 00:13:00.937 "name": "Malloc1", 00:13:00.937 "aliases": [ 00:13:00.937 "9b85ed18-f6fc-41b4-a99a-b036cbc57438" 00:13:00.937 ], 00:13:00.937 "product_name": "Malloc disk", 00:13:00.937 "block_size": 512, 00:13:00.937 "num_blocks": 1048576, 00:13:00.937 "uuid": "9b85ed18-f6fc-41b4-a99a-b036cbc57438", 00:13:00.937 "assigned_rate_limits": { 00:13:00.937 "rw_ios_per_sec": 0, 00:13:00.937 "rw_mbytes_per_sec": 0, 00:13:00.937 "r_mbytes_per_sec": 0, 00:13:00.937 "w_mbytes_per_sec": 0 00:13:00.937 }, 00:13:00.937 "claimed": true, 00:13:00.937 "claim_type": "exclusive_write", 00:13:00.937 "zoned": false, 00:13:00.937 "supported_io_types": { 00:13:00.937 "read": true, 00:13:00.937 "write": true, 00:13:00.937 "unmap": true, 00:13:00.937 "flush": true, 00:13:00.937 "reset": true, 00:13:00.937 "nvme_admin": false, 00:13:00.937 "nvme_io": false, 00:13:00.937 "nvme_io_md": false, 00:13:00.937 "write_zeroes": true, 00:13:00.937 "zcopy": true, 00:13:00.937 "get_zone_info": false, 00:13:00.937 "zone_management": false, 00:13:00.937 "zone_append": false, 00:13:00.937 "compare": false, 00:13:00.937 "compare_and_write": false, 00:13:00.937 "abort": true, 00:13:00.937 "seek_hole": false, 00:13:00.937 "seek_data": false, 00:13:00.937 "copy": true, 00:13:00.937 "nvme_iov_md": false 00:13:00.937 }, 00:13:00.937 "memory_domains": [ 00:13:00.937 { 00:13:00.937 "dma_device_id": "system", 00:13:00.937 "dma_device_type": 1 00:13:00.937 }, 00:13:00.937 { 00:13:00.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.937 "dma_device_type": 2 00:13:00.937 } 00:13:00.937 ], 00:13:00.937 "driver_specific": {} 00:13:00.937 } 00:13:00.937 ]' 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:13:00.937 19:03:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:04.239 19:03:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.239 19:03:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.239 19:03:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.239 19:03:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.239 19:03:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:06.143 19:03:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.521 ************************************ 00:13:07.521 START TEST filesystem_ext4 00:13:07.521 ************************************ 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:07.521 mke2fs 1.47.0 (5-Feb-2023) 00:13:07.521 Discarding device blocks: 0/522240 done 00:13:07.521 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:07.521 Filesystem UUID: 78c2fbd7-8f3a-4cac-8b08-daa2b6ce6597 00:13:07.521 Superblock backups stored on 
blocks: 00:13:07.521 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:07.521 00:13:07.521 Allocating group tables: 0/64 done 00:13:07.521 Writing inode tables: 0/64 done 00:13:07.521 Creating journal (8192 blocks): done 00:13:07.521 Writing superblocks and filesystem accounting information: 0/64 done 00:13:07.521 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:07.521 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 705587 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.522 00:13:07.522 real 0m0.192s 00:13:07.522 user 0m0.026s 00:13:07.522 sys 0m0.060s 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:07.522 ************************************ 00:13:07.522 END TEST filesystem_ext4 00:13:07.522 ************************************ 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:13:07.522 ************************************ 00:13:07.522 START TEST filesystem_btrfs 00:13:07.522 ************************************ 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:07.522 btrfs-progs v6.8.1 00:13:07.522 See https://btrfs.readthedocs.io for more information. 00:13:07.522 00:13:07.522 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:07.522 NOTE: several default settings have changed in version 5.15, please make sure 00:13:07.522 this does not affect your deployments: 00:13:07.522 - DUP for metadata (-m dup) 00:13:07.522 - enabled no-holes (-O no-holes) 00:13:07.522 - enabled free-space-tree (-R free-space-tree) 00:13:07.522 00:13:07.522 Label: (null) 00:13:07.522 UUID: d689b615-b662-4625-830a-915657f6ee43 00:13:07.522 Node size: 16384 00:13:07.522 Sector size: 4096 (CPU page size: 4096) 00:13:07.522 Filesystem size: 510.00MiB 00:13:07.522 Block group profiles: 00:13:07.522 Data: single 8.00MiB 00:13:07.522 Metadata: DUP 32.00MiB 00:13:07.522 System: DUP 8.00MiB 00:13:07.522 SSD detected: yes 00:13:07.522 Zoned device: no 00:13:07.522 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:07.522 Checksum: crc32c 00:13:07.522 Number of devices: 1 00:13:07.522 Devices: 00:13:07.522 ID SIZE PATH 00:13:07.522 1 510.00MiB /dev/nvme0n1p1 00:13:07.522 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:07.522 19:03:59 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 705587 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:07.782 00:13:07.782 real 0m0.258s 00:13:07.782 user 0m0.018s 00:13:07.782 sys 0m0.131s 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:07.782 ************************************ 00:13:07.782 END TEST filesystem_btrfs 
00:13:07.782 ************************************ 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.782 ************************************ 00:13:07.782 START TEST filesystem_xfs 00:13:07.782 ************************************ 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:07.782 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:08.041 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:08.041 = sectsz=512 attr=2, projid32bit=1 00:13:08.041 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:08.041 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:08.041 data = bsize=4096 blocks=130560, imaxpct=25 00:13:08.041 = sunit=0 swidth=0 blks 00:13:08.041 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:08.041 log =internal log bsize=4096 blocks=16384, version=2 00:13:08.041 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:08.041 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:08.041 Discarding blocks...Done. 
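Each filesystem_* subtest follows the same host-side pattern, already visible for ext4 and btrfs above and repeated here for xfs. Condensed, with the retry loops and the connect-time --hostnqn/--hostid flags from this run omitted:

    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
    mkfs.xfs -f /dev/nvme0n1p1          # ext4 uses -F, btrfs uses -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                  # the target must still be alive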
00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 705587 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.041 00:13:08.041 real 0m0.233s 00:13:08.041 user 0m0.029s 00:13:08.041 sys 0m0.071s 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.041 ************************************ 00:13:08.041 END TEST filesystem_xfs 00:13:08.041 ************************************ 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:08.041 19:04:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.576 19:04:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.576 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 705587 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 705587 ']' 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 705587 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705587 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705587' 00:13:10.577 killing process with pid 705587 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 705587 00:13:10.577 19:04:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 705587 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:11.144 00:13:11.144 real 0m11.393s 00:13:11.144 user 0m44.776s 00:13:11.144 sys 0m1.172s 00:13:11.144 19:04:03 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 ************************************ 00:13:11.144 END TEST nvmf_filesystem_no_in_capsule 00:13:11.144 ************************************ 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 ************************************ 00:13:11.144 START TEST nvmf_filesystem_in_capsule 00:13:11.144 ************************************ 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=707774 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 707774 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 707774 ']' 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
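Teardown between the two passes, then the one substantive difference in the second: the transport is created with -c 4096, so up to 4 KiB of write data may travel inside the command capsule instead of being fetched by the target with an RDMA READ. A sketch, reusing the hypothetical rpc_cmd shim from above:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                     # killprocess in the harness
    # Second pass: identical flow, but with in-capsule data enabled.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096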
00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.144 19:04:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 [2024-07-25 19:04:03.440800] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:11.144 [2024-07-25 19:04:03.440844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.144 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.144 [2024-07-25 19:04:03.510398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.144 [2024-07-25 19:04:03.588762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.144 [2024-07-25 19:04:03.588798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.144 [2024-07-25 19:04:03.588805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.144 [2024-07-25 19:04:03.588812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.144 [2024-07-25 19:04:03.588817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.144 [2024-07-25 19:04:03.588872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.144 [2024-07-25 19:04:03.588977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.144 [2024-07-25 19:04:03.589008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.144 [2024-07-25 19:04:03.589010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.083 [2024-07-25 19:04:04.343995] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xce4df0/0xce92e0) succeed. 00:13:12.083 [2024-07-25 19:04:04.353136] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xce6430/0xd2a980) succeed. 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.083 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.342 Malloc1 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.342 [2024-07-25 19:04:04.615230] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:12.342 19:04:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:12.342 { 00:13:12.342 "name": "Malloc1", 00:13:12.342 "aliases": [ 00:13:12.342 "f30e2eb2-d930-4098-a39e-50f7e6bebede" 00:13:12.342 ], 00:13:12.342 "product_name": "Malloc disk", 00:13:12.342 "block_size": 512, 00:13:12.342 "num_blocks": 1048576, 00:13:12.342 "uuid": "f30e2eb2-d930-4098-a39e-50f7e6bebede", 00:13:12.342 "assigned_rate_limits": { 00:13:12.342 "rw_ios_per_sec": 0, 00:13:12.342 "rw_mbytes_per_sec": 0, 00:13:12.342 "r_mbytes_per_sec": 0, 00:13:12.342 "w_mbytes_per_sec": 0 00:13:12.342 }, 00:13:12.342 "claimed": true, 00:13:12.342 "claim_type": "exclusive_write", 00:13:12.342 "zoned": false, 00:13:12.342 "supported_io_types": { 00:13:12.342 "read": true, 00:13:12.342 "write": true, 00:13:12.342 "unmap": true, 00:13:12.342 "flush": true, 00:13:12.342 "reset": true, 00:13:12.342 "nvme_admin": false, 00:13:12.342 "nvme_io": false, 00:13:12.342 "nvme_io_md": false, 00:13:12.342 "write_zeroes": true, 00:13:12.342 "zcopy": true, 00:13:12.342 "get_zone_info": false, 00:13:12.342 "zone_management": false, 00:13:12.342 "zone_append": false, 00:13:12.342 "compare": false, 00:13:12.342 "compare_and_write": false, 00:13:12.342 "abort": true, 00:13:12.342 "seek_hole": false, 00:13:12.342 "seek_data": false, 00:13:12.342 "copy": true, 00:13:12.342 "nvme_iov_md": false 00:13:12.342 }, 00:13:12.342 "memory_domains": [ 00:13:12.342 { 00:13:12.342 "dma_device_id": "system", 00:13:12.342 "dma_device_type": 1 00:13:12.342 }, 00:13:12.342 { 00:13:12.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.342 "dma_device_type": 2 00:13:12.342 } 00:13:12.342 ], 00:13:12.342 "driver_specific": {} 00:13:12.342 } 00:13:12.342 ]' 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:12.342 19:04:04 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:12.342 19:04:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:15.631 19:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.631 19:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.631 19:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.631 19:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.631 19:04:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.536 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.536 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.536 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.536 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.536 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:17.537 19:04:09 
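The trace above is the host side of the test: connect to the subsystem over NVMe/RDMA, then poll until the namespace's block device appears. A minimal sketch of the same sequence, with the hostnqn/hostid, subsystem NQN, and target address taken verbatim from the log (the real waitforserial helper also caps the poll at 15 tries):

# attach the initiator to nqn.2016-06.io.spdk:cnode1 over NVMe/RDMA
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
    --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

# poll until a block device advertising the subsystem serial shows up
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 2
done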
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:17.537 19:04:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:17.796 19:04:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.733 ************************************ 00:13:18.733 START TEST filesystem_in_capsule_ext4 00:13:18.733 ************************************ 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:18.733 mke2fs 1.47.0 (5-Feb-2023) 00:13:18.733 Discarding device blocks: 0/522240 done 
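Before the mkfs output here, the harness carved the namespace into a single partition and forced ext4 onto it. The same preparation, condensed into a standalone sketch with the device names this run reports:

# one GPT partition spanning the whole 512 MiB namespace
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe                    # pick up the new /dev/nvme0n1p1
sleep 1                      # the test pauses before touching the partition
mkfs.ext4 -F /dev/nvme0n1p1  # -F: force creation even when mke2fs's sanity checks would refuse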
00:13:18.733 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:18.733 Filesystem UUID: ed331ded-ada1-437a-a39a-1fb7909b1967 00:13:18.733 Superblock backups stored on blocks: 00:13:18.733 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:18.733 00:13:18.733 Allocating group tables: 0/64 done 00:13:18.733 Writing inode tables: 0/64 done 00:13:18.733 Creating journal (8192 blocks): done 00:13:18.733 Writing superblocks and filesystem accounting information: 0/64 done 00:13:18.733 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:18.733 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 707774 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:18.993 00:13:18.993 real 0m0.202s 00:13:18.993 user 0m0.025s 00:13:18.993 sys 0m0.063s 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:18.993 ************************************ 00:13:18.993 END TEST filesystem_in_capsule_ext4 00:13:18.993 ************************************ 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:18.993 19:04:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.993 ************************************ 00:13:18.993 START TEST filesystem_in_capsule_btrfs 00:13:18.993 ************************************ 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:18.993 btrfs-progs v6.8.1 00:13:18.993 See https://btrfs.readthedocs.io for more information. 00:13:18.993 00:13:18.993 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
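make_filesystem, the helper traced above for ext4 and here for btrfs, only varies the force flag by filesystem type: mke2fs spells it -F, while mkfs.btrfs and mkfs.xfs spell it -f. A condensed sketch of that dispatch, reconstructed from the trace (the helper's retry counter i is elided):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs convention
    else
        force=-f    # mkfs.btrfs / mkfs.xfs convention
    fi
    # e.g. as invoked by this test: make_filesystem btrfs /dev/nvme0n1p1
    mkfs."$fstype" $force "$dev_name"
}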
00:13:18.993 NOTE: several default settings have changed in version 5.15, please make sure 00:13:18.993 this does not affect your deployments: 00:13:18.993 - DUP for metadata (-m dup) 00:13:18.993 - enabled no-holes (-O no-holes) 00:13:18.993 - enabled free-space-tree (-R free-space-tree) 00:13:18.993 00:13:18.993 Label: (null) 00:13:18.993 UUID: 65b0b007-1fd9-4e63-bcf1-e167ed1c7e6f 00:13:18.993 Node size: 16384 00:13:18.993 Sector size: 4096 (CPU page size: 4096) 00:13:18.993 Filesystem size: 510.00MiB 00:13:18.993 Block group profiles: 00:13:18.993 Data: single 8.00MiB 00:13:18.993 Metadata: DUP 32.00MiB 00:13:18.993 System: DUP 8.00MiB 00:13:18.993 SSD detected: yes 00:13:18.993 Zoned device: no 00:13:18.993 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:18.993 Checksum: crc32c 00:13:18.993 Number of devices: 1 00:13:18.993 Devices: 00:13:18.993 ID SIZE PATH 00:13:18.993 1 510.00MiB /dev/nvme0n1p1 00:13:18.993 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:18.993 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 707774 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:19.253 00:13:19.253 real 0m0.261s 00:13:19.253 user 0m0.025s 00:13:19.253 sys 0m0.112s 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.253 ************************************ 00:13:19.253 END TEST filesystem_in_capsule_btrfs 00:13:19.253 ************************************ 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.253 ************************************ 00:13:19.253 START TEST filesystem_in_capsule_xfs 00:13:19.253 ************************************ 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:19.253 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:19.513 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:19.513 = sectsz=512 attr=2, projid32bit=1 00:13:19.513 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:19.513 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:19.513 data = bsize=4096 blocks=130560, imaxpct=25 00:13:19.513 = sunit=0 swidth=0 blks 00:13:19.513 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:19.513 log =internal log bsize=4096 blocks=16384, version=2 00:13:19.513 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:19.513 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:19.513 Discarding blocks...Done. 
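Each mkfs (ext4, btrfs, and now xfs above) is followed by the same smoke test, which the trace below repeats: mount the fresh filesystem, create and delete a file with syncs in between, unmount, and confirm that the target process and its block devices survived. Sketched as plain bash with the pid and device names from this run:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 707774                            # SPDK target (pid from this run) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exposed
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present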
00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 707774 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.513 00:13:19.513 real 0m0.191s 00:13:19.513 user 0m0.026s 00:13:19.513 sys 0m0.066s 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:19.513 ************************************ 00:13:19.513 END TEST filesystem_in_capsule_xfs 00:13:19.513 ************************************ 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:19.513 19:04:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.043 19:04:14 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 707774 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 707774 ']' 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 707774 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 707774 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 707774' 00:13:22.043 killing process with pid 707774 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 707774 00:13:22.043 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 707774 00:13:22.302 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:22.302 00:13:22.302 real 0m11.381s 00:13:22.302 
user 0m44.652s 00:13:22.302 sys 0m1.195s 00:13:22.302 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.302 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.302 ************************************ 00:13:22.302 END TEST nvmf_filesystem_in_capsule 00:13:22.302 ************************************ 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:22.559 rmmod nvme_rdma 00:13:22.559 rmmod nvme_fabrics 00:13:22.559 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:22.560 00:13:22.560 real 0m29.091s 00:13:22.560 user 1m31.296s 00:13:22.560 sys 0m6.964s 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:22.560 ************************************ 00:13:22.560 END TEST nvmf_filesystem 00:13:22.560 ************************************ 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.560 ************************************ 00:13:22.560 START TEST nvmf_target_discovery 00:13:22.560 ************************************ 00:13:22.560 19:04:14 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:13:22.560 * Looking for test storage... 
00:13:22.560 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.560 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.818 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.819 19:04:15 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.819 19:04:15 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.392 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:13:29.393 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:13:29.393 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:13:29.393 Found net devices under 0000:af:00.0: mlx_0_0 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:13:29.393 Found net devices under 0000:af:00.1: mlx_0_1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.393 19:04:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.393 19:04:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:29.393 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:29.393 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:13:29.393 altname enp175s0f0np0 00:13:29.393 altname ens801f0np0 00:13:29.393 inet 192.168.100.8/24 scope global mlx_0_0 00:13:29.393 valid_lft forever preferred_lft forever 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:29.393 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:29.393 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:13:29.393 altname enp175s0f1np1 00:13:29.393 altname ens801f1np1 00:13:29.393 inet 192.168.100.9/24 scope global mlx_0_1 00:13:29.393 valid_lft forever preferred_lft forever 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:29.393 19:04:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.393 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.394 19:04:20 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:29.394 192.168.100.9' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:29.394 192.168.100.9' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:29.394 192.168.100.9' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=712997 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 712997 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 712997 ']' 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.394 19:04:20 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 [2024-07-25 19:04:20.934134] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:29.394 [2024-07-25 19:04:20.934183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.394 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.394 [2024-07-25 19:04:21.002389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.394 [2024-07-25 19:04:21.080008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.394 [2024-07-25 19:04:21.080044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.394 [2024-07-25 19:04:21.080051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.394 [2024-07-25 19:04:21.080057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.394 [2024-07-25 19:04:21.080062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
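
nvmfappstart above amounts to launching the target binary and waiting for its RPC socket; a rough sketch, where the polling loop is a hypothetical stand-in for what waitforlisten does:

# Launch nvmf_tgt: shm id 0, all tracepoint groups (0xFFFF), core mask 0xF (cores 0-3).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app answers on /var/tmp/spdk.sock; rpc_get_methods is a cheap query.
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
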
00:13:29.394 [2024-07-25 19:04:21.080107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.394 [2024-07-25 19:04:21.080217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.394 [2024-07-25 19:04:21.080324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.394 [2024-07-25 19:04:21.080324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.394 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 [2024-07-25 19:04:21.857244] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b64df0/0x1b692e0) succeed. 00:13:29.654 [2024-07-25 19:04:21.866624] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b66430/0x1baa980) succeed. 
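
rpc_cmd in these traces is a thin wrapper over scripts/rpc.py talking to that socket, so the transport setup just performed should be roughly equivalent to:

# Create the RDMA transport; flags exactly as traced above
# (--num-shared-buffers 1024, -u sets the I/O unit size to 8192 bytes).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
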
00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 Null1 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:21 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 [2024-07-25 19:04:22.031309] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 Null2 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:29.654 19:04:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 Null3 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 Null4 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.655 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
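
Condensed, the provisioning sequence traced through here creates four identical null-backed subsystems, a discovery listener, and a referral; the same RPC calls written as a plain loop:

# One null bdev per subsystem (size 102400, block size 512, as traced),
# each subsystem listening on RDMA 192.168.100.8:4420.
for i in 1 2 3 4; do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
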
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:13:29.915 00:13:29.915 Discovery Log Number of Records 6, Generation counter 6 00:13:29.915 =====Discovery Log Entry 0====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: current discovery subsystem 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4420 00:13:29.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: explicit discovery connections, duplicate discovery information 00:13:29.915 rdma_prtype: not specified 00:13:29.915 rdma_qptype: connected 00:13:29.915 rdma_cms: rdma-cm 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 =====Discovery Log Entry 1====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: nvme subsystem 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4420 00:13:29.915 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: none 00:13:29.915 rdma_prtype: not specified 00:13:29.915 rdma_qptype: connected 00:13:29.915 rdma_cms: rdma-cm 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 =====Discovery Log Entry 2====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: nvme subsystem 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4420 00:13:29.915 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: none 00:13:29.915 rdma_prtype: not specified 00:13:29.915 rdma_qptype: connected 00:13:29.915 rdma_cms: rdma-cm 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 =====Discovery Log Entry 3====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: nvme subsystem 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4420 00:13:29.915 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: none 00:13:29.915 rdma_prtype: not specified 00:13:29.915 rdma_qptype: connected 00:13:29.915 rdma_cms: rdma-cm 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 =====Discovery Log Entry 4====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: nvme subsystem 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4420 00:13:29.915 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: none 00:13:29.915 rdma_prtype: not specified 00:13:29.915 rdma_qptype: connected 00:13:29.915 rdma_cms: rdma-cm 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 =====Discovery Log Entry 5====== 00:13:29.915 trtype: rdma 00:13:29.915 adrfam: ipv4 00:13:29.915 subtype: discovery subsystem referral 00:13:29.915 treq: not required 00:13:29.915 portid: 0 00:13:29.915 trsvcid: 4430 00:13:29.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:29.915 traddr: 192.168.100.8 00:13:29.915 eflags: none 00:13:29.915 rdma_prtype: unrecognized 00:13:29.915 rdma_qptype: unrecognized 00:13:29.915 rdma_cms: unrecognized 00:13:29.915 rdma_pkey: 0x0000 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:29.915 Perform nvmf subsystem discovery via RPC 00:13:29.915 19:04:22 
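
The six-record discovery log just printed (the current discovery subsystem, cnode1 through cnode4, and the port-4430 referral) is the output of a single query against the target:

# Discover over RDMA, using the host NQN/ID generated for this run.
nvme discover -t rdma -a 192.168.100.8 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
    --hostid=80bdebd3-4c74-ea11-906e-0017a4403562
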
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.915 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 [ 00:13:29.915 { 00:13:29.915 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:29.915 "subtype": "Discovery", 00:13:29.915 "listen_addresses": [ 00:13:29.915 { 00:13:29.915 "trtype": "RDMA", 00:13:29.915 "adrfam": "IPv4", 00:13:29.915 "traddr": "192.168.100.8", 00:13:29.915 "trsvcid": "4420" 00:13:29.915 } 00:13:29.915 ], 00:13:29.915 "allow_any_host": true, 00:13:29.915 "hosts": [] 00:13:29.915 }, 00:13:29.915 { 00:13:29.915 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.915 "subtype": "NVMe", 00:13:29.915 "listen_addresses": [ 00:13:29.915 { 00:13:29.915 "trtype": "RDMA", 00:13:29.915 "adrfam": "IPv4", 00:13:29.915 "traddr": "192.168.100.8", 00:13:29.915 "trsvcid": "4420" 00:13:29.915 } 00:13:29.915 ], 00:13:29.915 "allow_any_host": true, 00:13:29.915 "hosts": [], 00:13:29.915 "serial_number": "SPDK00000000000001", 00:13:29.915 "model_number": "SPDK bdev Controller", 00:13:29.915 "max_namespaces": 32, 00:13:29.915 "min_cntlid": 1, 00:13:29.915 "max_cntlid": 65519, 00:13:29.915 "namespaces": [ 00:13:29.915 { 00:13:29.915 "nsid": 1, 00:13:29.915 "bdev_name": "Null1", 00:13:29.915 "name": "Null1", 00:13:29.915 "nguid": "86911E5CF5B440FAB5C38080D0899CF5", 00:13:29.915 "uuid": "86911e5c-f5b4-40fa-b5c3-8080d0899cf5" 00:13:29.915 } 00:13:29.915 ] 00:13:29.915 }, 00:13:29.915 { 00:13:29.915 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:29.915 "subtype": "NVMe", 00:13:29.915 "listen_addresses": [ 00:13:29.915 { 00:13:29.915 "trtype": "RDMA", 00:13:29.915 "adrfam": "IPv4", 00:13:29.915 "traddr": "192.168.100.8", 00:13:29.915 "trsvcid": "4420" 00:13:29.915 } 00:13:29.915 ], 00:13:29.915 "allow_any_host": true, 00:13:29.915 "hosts": [], 00:13:29.915 "serial_number": "SPDK00000000000002", 00:13:29.915 "model_number": "SPDK bdev Controller", 00:13:29.915 "max_namespaces": 32, 00:13:29.915 "min_cntlid": 1, 00:13:29.915 "max_cntlid": 65519, 00:13:29.915 "namespaces": [ 00:13:29.915 { 00:13:29.915 "nsid": 1, 00:13:29.915 "bdev_name": "Null2", 00:13:29.915 "name": "Null2", 00:13:29.915 "nguid": "2D1A9A2822E84586800F0F425C8C4733", 00:13:29.915 "uuid": "2d1a9a28-22e8-4586-800f-0f425c8c4733" 00:13:29.915 } 00:13:29.915 ] 00:13:29.915 }, 00:13:29.915 { 00:13:29.915 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:29.916 "subtype": "NVMe", 00:13:29.916 "listen_addresses": [ 00:13:29.916 { 00:13:29.916 "trtype": "RDMA", 00:13:29.916 "adrfam": "IPv4", 00:13:29.916 "traddr": "192.168.100.8", 00:13:29.916 "trsvcid": "4420" 00:13:29.916 } 00:13:29.916 ], 00:13:29.916 "allow_any_host": true, 00:13:29.916 "hosts": [], 00:13:29.916 "serial_number": "SPDK00000000000003", 00:13:29.916 "model_number": "SPDK bdev Controller", 00:13:29.916 "max_namespaces": 32, 00:13:29.916 "min_cntlid": 1, 00:13:29.916 "max_cntlid": 65519, 00:13:29.916 "namespaces": [ 00:13:29.916 { 00:13:29.916 "nsid": 1, 00:13:29.916 "bdev_name": "Null3", 00:13:29.916 "name": "Null3", 00:13:29.916 "nguid": "F2CFDA9CC95749C78400882B47D5EC89", 00:13:29.916 "uuid": "f2cfda9c-c957-49c7-8400-882b47d5ec89" 00:13:29.916 } 00:13:29.916 ] 00:13:29.916 }, 00:13:29.916 { 00:13:29.916 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:29.916 "subtype": "NVMe", 00:13:29.916 "listen_addresses": [ 00:13:29.916 { 00:13:29.916 
"trtype": "RDMA", 00:13:29.916 "adrfam": "IPv4", 00:13:29.916 "traddr": "192.168.100.8", 00:13:29.916 "trsvcid": "4420" 00:13:29.916 } 00:13:29.916 ], 00:13:29.916 "allow_any_host": true, 00:13:29.916 "hosts": [], 00:13:29.916 "serial_number": "SPDK00000000000004", 00:13:29.916 "model_number": "SPDK bdev Controller", 00:13:29.916 "max_namespaces": 32, 00:13:29.916 "min_cntlid": 1, 00:13:29.916 "max_cntlid": 65519, 00:13:29.916 "namespaces": [ 00:13:29.916 { 00:13:29.916 "nsid": 1, 00:13:29.916 "bdev_name": "Null4", 00:13:29.916 "name": "Null4", 00:13:29.916 "nguid": "75332F38672A45BC993102028AC1271C", 00:13:29.916 "uuid": "75332f38-672a-45bc-9931-02028ac1271c" 00:13:29.916 } 00:13:29.916 ] 00:13:29.916 } 00:13:29.916 ] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:29.916 
19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:30.175 19:04:22 
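
Teardown, condensed from the RPC calls traced above, mirrors setup: delete each subsystem before its backing bdev, drop the referral, then confirm no bdevs remain:

for i in 1 2 3 4; do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')   # empty here, as expected
[ -z "$check_bdevs" ]
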
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:30.175 rmmod nvme_rdma 00:13:30.175 rmmod nvme_fabrics 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 712997 ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 712997 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 712997 ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 712997 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 712997 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 712997' 00:13:30.175 killing process with pid 712997 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 712997 00:13:30.175 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 712997 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:30.435 00:13:30.435 real 0m7.808s 00:13:30.435 user 0m8.368s 00:13:30.435 sys 0m4.826s 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.435 ************************************ 00:13:30.435 END TEST nvmf_target_discovery 
00:13:30.435 ************************************ 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.435 ************************************ 00:13:30.435 START TEST nvmf_referrals 00:13:30.435 ************************************ 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:13:30.435 * Looking for test storage... 00:13:30.435 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.435 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.695 
19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.695 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.696 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.696 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.696 19:04:22 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.265 19:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.265 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:13:37.266 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:13:37.266 19:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:13:37.266 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:13:37.266 Found net devices under 0000:af:00.0: mlx_0_0 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:13:37.266 Found net devices under 0000:af:00.1: mlx_0_1 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 
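
The PCI scan above keys on Mellanox IDs (vendor 0x15b3, device 0x1017) and then resolves each function's net interfaces through sysfs; the lookup it performs is essentially:

# Net devices registered under a PCI function (the pci_net_devs glob from the trace).
ls /sys/bus/pci/devices/0000:af:00.0/net/
# -> mlx_0_0 on this machine; 0000:af:00.1 holds mlx_0_1
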
00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:37.266 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:37.266 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:13:37.266 altname enp175s0f0np0 00:13:37.266 altname ens801f0np0 00:13:37.266 inet 192.168.100.8/24 scope global mlx_0_0 00:13:37.266 valid_lft forever preferred_lft forever 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:37.266 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:37.267 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:37.267 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:13:37.267 altname enp175s0f1np1 00:13:37.267 altname ens801f1np1 00:13:37.267 inet 192.168.100.9/24 scope global mlx_0_1 00:13:37.267 valid_lft forever preferred_lft forever 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
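[get_ip_address, which the trace just ran for mlx_0_0 and mlx_0_1, is a three-stage pipeline: ip -o -4 addr show prints one record per line, awk takes the fourth field (ADDR/PREFIX), and cut drops the prefix length. Reconstructed as a reusable function from the traced commands:

    # Print the IPv4 address(es) of an interface, without the /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
]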
-- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:37.267 192.168.100.9' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:37.267 192.168.100.9' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:37.267 192.168.100.9' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=716369 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 716369 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 716369 ']' 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.267 19:04:28 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.267 [2024-07-25 19:04:28.826273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
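[Above, the two per-port addresses are folded into the newline-separated RDMA_IP_LIST, and the first/second target IPs are carved off with head/tail; nvmf_tgt is then launched with core mask 0xF and the script blocks until its RPC socket answers. A sketch of just the list-splitting step, mirroring the traced pipelines:

    # Split the newline-separated IP list into first and second target IPs.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
]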
00:13:37.267 [2024-07-25 19:04:28.826320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.267 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.267 [2024-07-25 19:04:28.896399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.267 [2024-07-25 19:04:28.975236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.267 [2024-07-25 19:04:28.975274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.267 [2024-07-25 19:04:28.975281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.267 [2024-07-25 19:04:28.975287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.267 [2024-07-25 19:04:28.975292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.267 [2024-07-25 19:04:28.975344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.267 [2024-07-25 19:04:28.975451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.267 [2024-07-25 19:04:28.975557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.267 [2024-07-25 19:04:28.975558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.267 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 [2024-07-25 19:04:29.740857] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1659df0/0x165e2e0) succeed. 00:13:37.527 [2024-07-25 19:04:29.750117] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x165b430/0x169f980) succeed. 
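[Once nvmf_tgt is up, referrals.sh drives it entirely over the JSON-RPC socket: nvmf_create_transport allocates the RDMA transport (the two "Create IB device ... succeed" notices are its per-port setup), and a discovery listener is added on the first target IP immediately below. Roughly equivalent standalone invocations via SPDK's scripts/rpc.py, with the socket path and addresses as in this run:

    # Stand up the RDMA transport and the discovery listener over JSON-RPC.
    RPC_SOCK=/var/tmp/spdk.sock
    scripts/rpc.py -s "$RPC_SOCK" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192
    scripts/rpc.py -s "$RPC_SOCK" nvmf_subsystem_add_listener discovery \
        -t rdma -a 192.168.100.8 -s 8009
]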
00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 [2024-07-25 19:04:29.875922] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.527 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.793 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.793 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.793 19:04:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.793 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:38.050 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.051 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 
--hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.308 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.566 19:04:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.566 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
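[Each add/remove cycle above is verified from both sides: rpc_cmd nvmf_discovery_get_referrals reads the target's view, while nvme discover reads what a host actually sees on the discovery log page, with jq filtering the JSON records by subtype. The host-side check, condensed from the traced command (the hostnqn/hostid are this rig's generated identity):

    # Host-side view: dump referral addresses from the discovery log page.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}" |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The select() clause drops the discovery subsystem the host is currently talking to, leaving only referral and NVM-subsystem entries to compare against the expected IP list.]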
00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:38.825 rmmod nvme_rdma 00:13:38.825 rmmod nvme_fabrics 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 716369 ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 716369 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 716369 ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 716369 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 716369 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 716369' 00:13:38.825 killing process with pid 716369 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 716369 00:13:38.825 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 716369 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:39.084 00:13:39.084 real 0m8.653s 00:13:39.084 user 0m12.320s 00:13:39.084 sys 0m5.207s 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.084 19:04:31 
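[nvmftestfini unwinds in reverse: with the referral table empty, it syncs, unloads nvme-rdma and nvme-fabrics (with set +e and up to 20 modprobe -r attempts, since the modules can stay busy briefly), then kills the nvmf_tgt pid recorded at startup. A condensed sketch of that teardown — the retry count is from the trace, the sleep between attempts is an assumption:

    # Unload host-side fabrics modules, retrying while references drain.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # assumed pacing; the helper may retry immediately
    done
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=716369 in this run
]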
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:39.084 ************************************ 00:13:39.084 END TEST nvmf_referrals 00:13:39.084 ************************************ 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.084 ************************************ 00:13:39.084 START TEST nvmf_connect_disconnect 00:13:39.084 ************************************ 00:13:39.084 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:13:39.343 * Looking for test storage... 00:13:39.343 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.343 19:04:31 
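[The connect_disconnect test begins by re-sourcing nvmf/common.sh, which regenerates the host identity: nvme gen-hostnqn produces an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is the bare UUID at its tail. One way to carve it, matching the values in the trace (the real helper may differ in detail):

    # Generate the host NQN and derive the matching host ID from it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything through the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
]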
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.343 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.344 19:04:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
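[The enormous PATH echoed above is a side effect of paths/export.sh being re-sourced by every nested run_test: each pass blindly prepends the Go/protoc/golangci directories again. It is harmless, but a guard like the following (hypothetical; not part of the SPDK scripts) would keep the variable idempotent:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH
]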
common/autotest_common.sh@10 -- # set +x 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.917 19:04:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:45.917 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:13:45.918 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:13:45.918 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:13:45.918 Found net devices under 0000:af:00.0: mlx_0_0 00:13:45.918 19:04:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:13:45.918 Found net devices under 0000:af:00.1: mlx_0_1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:45.918 19:04:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:45.918 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:45.918 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:13:45.918 altname enp175s0f0np0 00:13:45.918 altname ens801f0np0 00:13:45.918 inet 192.168.100.8/24 scope global mlx_0_0 00:13:45.918 valid_lft forever preferred_lft forever 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:45.918 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:45.918 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:13:45.918 altname enp175s0f1np1 00:13:45.918 altname ens801f1np1 00:13:45.918 inet 192.168.100.9/24 scope global mlx_0_1 00:13:45.918 valid_lft forever preferred_lft forever 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:45.918 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.919 19:04:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:45.919 192.168.100.9' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:45.919 192.168.100.9' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:45.919 192.168.100.9' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=720138 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 720138 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 720138 ']' 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.919 19:04:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 [2024-07-25 19:04:37.568820] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:45.919 [2024-07-25 19:04:37.568873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.919 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.919 [2024-07-25 19:04:37.640315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.919 [2024-07-25 19:04:37.713856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.919 [2024-07-25 19:04:37.713897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.919 [2024-07-25 19:04:37.713910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.919 [2024-07-25 19:04:37.713916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.919 [2024-07-25 19:04:37.713937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
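The target-IP discovery traced in nvmf/common.sh@73-113 above reduces to one short helper plus two coreutils filters; a minimal standalone sketch of the same logic (the interface names are the ones this rig reports, not a general assumption):

get_ip_address() {
    # First IPv4 address on the interface, with the /24 prefix length stripped
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
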
00:13:45.919 [2024-07-25 19:04:37.713999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.919 [2024-07-25 19:04:37.714104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.919 [2024-07-25 19:04:37.714210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.919 [2024-07-25 19:04:37.714211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.178 [2024-07-25 19:04:38.457480] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:46.178 [2024-07-25 19:04:38.477685] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ef8df0/0x1efd2e0) succeed. 00:13:46.178 [2024-07-25 19:04:38.487104] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1efa430/0x1f3e980) succeed. 
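Replayed by hand, the connect_disconnect bring-up traced around this point is five RPCs against the freshly started nvmf_tgt; a sketch assuming the stock scripts/rpc.py talking to the default /var/tmp/spdk.sock that waitforlisten polled above, with the values copied from this trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
$rpc bdev_malloc_create 64 512     # 64 MiB ramdisk with 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
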
00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.178 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:46.179 [2024-07-25 19:04:38.629600] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:46.179 19:04:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:54.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:23.693 19:05:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.693 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:23.693 rmmod nvme_rdma 00:14:23.952 rmmod nvme_fabrics 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 720138 ']' 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 720138 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 720138 ']' 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 720138 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 720138 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 720138' 00:14:23.952 killing process with pid 720138 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 720138 00:14:23.952 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 720138 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:24.212 00:14:24.212 real 0m44.970s 00:14:24.212 user 2m36.279s 00:14:24.212 sys 0m5.936s 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:24.212 
************************************ 00:14:24.212 END TEST nvmf_connect_disconnect 00:14:24.212 ************************************ 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.212 ************************************ 00:14:24.212 START TEST nvmf_multitarget 00:14:24.212 ************************************ 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:24.212 * Looking for test storage... 00:14:24.212 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.212 19:05:16 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:24.212 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.472 19:05:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.044 
19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:31.044 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:31.044 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:14:31.044 Found net devices under 0000:af:00.0: mlx_0_0 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:31.044 Found net devices under 0000:af:00.1: mlx_0_1 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:31.044 
19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:31.044 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.045 
19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:31.045 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.045 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:14:31.045 altname enp175s0f0np0 00:14:31.045 altname ens801f0np0 00:14:31.045 inet 192.168.100.8/24 scope global mlx_0_0 00:14:31.045 valid_lft forever preferred_lft forever 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:31.045 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:31.045 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:14:31.045 altname enp175s0f1np1 00:14:31.045 altname ens801f1np1 00:14:31.045 inet 192.168.100.9/24 scope global mlx_0_1 00:14:31.045 valid_lft forever preferred_lft forever 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:31.045 19:05:22 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:31.045 192.168.100.9' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:31.045 192.168.100.9' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:31.045 192.168.100.9' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:31.045 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=729806 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 729806 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 729806 ']' 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.046 19:05:22 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.046 [2024-07-25 19:05:22.661099] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
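The multitarget checks that follow drive the test's helper RPC script directly; condensed into the equivalent shell (script path and the -s value are copied from the trace; jq is assumed on PATH, and the count of 1 is the implicit default target):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32       # -s 32 as traced
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]  # default + the two named targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # back to just the default
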
00:14:31.046 [2024-07-25 19:05:22.661144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.046 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.046 [2024-07-25 19:05:22.729997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.046 [2024-07-25 19:05:22.808533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.046 [2024-07-25 19:05:22.808569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.046 [2024-07-25 19:05:22.808577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.046 [2024-07-25 19:05:22.808583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.046 [2024-07-25 19:05:22.808588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.046 [2024-07-25 19:05:22.808632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.046 [2024-07-25 19:05:22.808742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.046 [2024-07-25 19:05:22.808848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.046 [2024-07-25 19:05:22.808849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.046 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.046 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:31.046 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.046 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.046 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:31.305 "nvmf_tgt_1" 00:14:31.305 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:31.563 "nvmf_tgt_2" 00:14:31.563 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:31.563 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:31.563 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:31.563 19:05:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:31.822 true 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:31.822 true 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.822 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:31.822 rmmod nvme_rdma 00:14:32.081 rmmod nvme_fabrics 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 729806 ']' 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 729806 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 729806 ']' 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 729806 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 729806 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 
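Condensed sketch of the multitarget sequence the trace is driving through multitarget_rpc.py (the $rpc shorthand is introduced here for readability; the counts match the jq length checks above):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at start
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # two extra named targets
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + nvmf_tgt_1 + nvmf_tgt_2
    $rpc nvmf_delete_target -n nvmf_tgt_1              # delete both again
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target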
-- # process_name=reactor_0 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 729806' 00:14:32.081 killing process with pid 729806 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 729806 00:14:32.081 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 729806 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:32.340 00:14:32.340 real 0m7.991s 00:14:32.340 user 0m9.481s 00:14:32.340 sys 0m4.901s 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.340 ************************************ 00:14:32.340 END TEST nvmf_multitarget 00:14:32.340 ************************************ 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:32.340 19:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.341 ************************************ 00:14:32.341 START TEST nvmf_rpc 00:14:32.341 ************************************ 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:32.341 * Looking for test storage... 
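Rough shape of the killprocess teardown visible just above (pid copied from the trace; the sudo branch is not taken here because nvmf_tgt runs as reactor_0):

    pid=729806
    kill -0 "$pid"                              # still alive?
    pname=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app
    if [ "$pname" = sudo ]; then
      sudo kill "$pid"
    else
      kill "$pid"                               # branch actually taken in the trace
    fi
    wait "$pid"                                 # reap it before the next test starts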
00:14:32.341 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.341 19:05:24 
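The host identity set up a few lines above comes from nvme gen-hostnqn; a short sketch of that setup (the NVME_HOSTID derivation shown here is illustrative, not a quote of common.sh; the NVME_HOST array line is as in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:80bdebd3-... on this rig
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare uuid, reused as --hostid on every connect below
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")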
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.341 19:05:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.913 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:14:38.914 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:14:38.914 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 
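A sketch of the vendor/device match behind the two "Found 0000:af:00.x (0x15b3 - 0x1017)" lines, re-derived here against sysfs (the harness itself consults a pre-built pci_bus_cache; 0x15b3/0x1017 is a Mellanox ConnectX-5 function):

    mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
      if [ "$vendor" = "$mellanox" ] && [ "$device" = 0x1017 ]; then
        echo "Found ${pci##*/} ($vendor - $device)"
      fi
    done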
0000:af:00.0: mlx_0_0' 00:14:38.914 Found net devices under 0000:af:00.0: mlx_0_0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:14:38.914 Found net devices under 0000:af:00.1: mlx_0_1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
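Before any IPs are assigned, rdma_device_init loads the full kernel RDMA stack; the modprobe sequence from the trace, collected in one place:

    # Same module order as nvmf/common.sh@62-68 above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
    done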
00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:38.914 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.914 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:14:38.914 altname enp175s0f0np0 00:14:38.914 altname ens801f0np0 00:14:38.914 inet 192.168.100.8/24 scope global mlx_0_0 00:14:38.914 valid_lft forever preferred_lft forever 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:38.914 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.914 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:14:38.914 altname enp175s0f1np1 00:14:38.914 altname ens801f1np1 00:14:38.914 inet 192.168.100.9/24 scope global mlx_0_1 00:14:38.914 valid_lft forever preferred_lft forever 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.914 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.915 19:05:30 
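Both ports now hold their static addresses, so the helper the trace keeps re-running is worth spelling out; a minimal stand-in for get_ip_address (nvmf/common.sh@112-113):

    get_ip_address() {
      local interface=$1
      # column 4 of 'ip -o -4 addr show' is "addr/prefix"; cut drops the prefix length
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig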
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:38.915 192.168.100.9' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:38.915 192.168.100.9' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:38.915 192.168.100.9' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=733390 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 733390 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 733390 ']' 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.915 19:05:30 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.915 19:05:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.915 [2024-07-25 19:05:30.615945] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:38.915 [2024-07-25 19:05:30.615988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.915 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.915 [2024-07-25 19:05:30.684026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.915 [2024-07-25 19:05:30.761755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.915 [2024-07-25 19:05:30.761792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.915 [2024-07-25 19:05:30.761800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.915 [2024-07-25 19:05:30.761806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.915 [2024-07-25 19:05:30.761815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.915 [2024-07-25 19:05:30.761871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.915 [2024-07-25 19:05:30.761978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.915 [2024-07-25 19:05:30.762003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.915 [2024-07-25 19:05:30.762004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.179 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:39.179 "tick_rate": 2300000000, 00:14:39.179 "poll_groups": [ 00:14:39.179 { 00:14:39.179 "name": "nvmf_tgt_poll_group_000", 00:14:39.179 "admin_qpairs": 0, 
00:14:39.179 "io_qpairs": 0, 00:14:39.179 "current_admin_qpairs": 0, 00:14:39.179 "current_io_qpairs": 0, 00:14:39.179 "pending_bdev_io": 0, 00:14:39.179 "completed_nvme_io": 0, 00:14:39.179 "transports": [] 00:14:39.179 }, 00:14:39.179 { 00:14:39.179 "name": "nvmf_tgt_poll_group_001", 00:14:39.179 "admin_qpairs": 0, 00:14:39.180 "io_qpairs": 0, 00:14:39.180 "current_admin_qpairs": 0, 00:14:39.180 "current_io_qpairs": 0, 00:14:39.180 "pending_bdev_io": 0, 00:14:39.180 "completed_nvme_io": 0, 00:14:39.180 "transports": [] 00:14:39.180 }, 00:14:39.180 { 00:14:39.180 "name": "nvmf_tgt_poll_group_002", 00:14:39.180 "admin_qpairs": 0, 00:14:39.180 "io_qpairs": 0, 00:14:39.180 "current_admin_qpairs": 0, 00:14:39.180 "current_io_qpairs": 0, 00:14:39.180 "pending_bdev_io": 0, 00:14:39.180 "completed_nvme_io": 0, 00:14:39.180 "transports": [] 00:14:39.180 }, 00:14:39.180 { 00:14:39.180 "name": "nvmf_tgt_poll_group_003", 00:14:39.180 "admin_qpairs": 0, 00:14:39.180 "io_qpairs": 0, 00:14:39.180 "current_admin_qpairs": 0, 00:14:39.180 "current_io_qpairs": 0, 00:14:39.180 "pending_bdev_io": 0, 00:14:39.180 "completed_nvme_io": 0, 00:14:39.180 "transports": [] 00:14:39.180 } 00:14:39.180 ] 00:14:39.180 }' 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.180 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.180 [2024-07-25 19:05:31.644379] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1207e50/0x120c340) succeed. 00:14:39.468 [2024-07-25 19:05:31.655075] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1209490/0x124d9e0) succeed. 
00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.468 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:39.468 "tick_rate": 2300000000, 00:14:39.468 "poll_groups": [ 00:14:39.468 { 00:14:39.468 "name": "nvmf_tgt_poll_group_000", 00:14:39.468 "admin_qpairs": 0, 00:14:39.468 "io_qpairs": 0, 00:14:39.468 "current_admin_qpairs": 0, 00:14:39.468 "current_io_qpairs": 0, 00:14:39.468 "pending_bdev_io": 0, 00:14:39.468 "completed_nvme_io": 0, 00:14:39.468 "transports": [ 00:14:39.468 { 00:14:39.468 "trtype": "RDMA", 00:14:39.468 "pending_data_buffer": 0, 00:14:39.468 "devices": [ 00:14:39.468 { 00:14:39.468 "name": "mlx5_0", 00:14:39.468 "polls": 15087, 00:14:39.468 "idle_polls": 15087, 00:14:39.468 "completions": 0, 00:14:39.468 "requests": 0, 00:14:39.468 "request_latency": 0, 00:14:39.468 "pending_free_request": 0, 00:14:39.468 "pending_rdma_read": 0, 00:14:39.468 "pending_rdma_write": 0, 00:14:39.468 "pending_rdma_send": 0, 00:14:39.468 "total_send_wrs": 0, 00:14:39.468 "send_doorbell_updates": 0, 00:14:39.468 "total_recv_wrs": 4096, 00:14:39.468 "recv_doorbell_updates": 1 00:14:39.468 }, 00:14:39.468 { 00:14:39.468 "name": "mlx5_1", 00:14:39.468 "polls": 15087, 00:14:39.468 "idle_polls": 15087, 00:14:39.468 "completions": 0, 00:14:39.468 "requests": 0, 00:14:39.468 "request_latency": 0, 00:14:39.468 "pending_free_request": 0, 00:14:39.468 "pending_rdma_read": 0, 00:14:39.468 "pending_rdma_write": 0, 00:14:39.468 "pending_rdma_send": 0, 00:14:39.468 "total_send_wrs": 0, 00:14:39.468 "send_doorbell_updates": 0, 00:14:39.468 "total_recv_wrs": 4096, 00:14:39.468 "recv_doorbell_updates": 1 00:14:39.468 } 00:14:39.468 ] 00:14:39.468 } 00:14:39.468 ] 00:14:39.468 }, 00:14:39.468 { 00:14:39.468 "name": "nvmf_tgt_poll_group_001", 00:14:39.468 "admin_qpairs": 0, 00:14:39.468 "io_qpairs": 0, 00:14:39.468 "current_admin_qpairs": 0, 00:14:39.468 "current_io_qpairs": 0, 00:14:39.468 "pending_bdev_io": 0, 00:14:39.468 "completed_nvme_io": 0, 00:14:39.468 "transports": [ 00:14:39.468 { 00:14:39.468 "trtype": "RDMA", 00:14:39.468 "pending_data_buffer": 0, 00:14:39.468 "devices": [ 00:14:39.468 { 00:14:39.468 "name": "mlx5_0", 00:14:39.468 "polls": 9803, 00:14:39.468 "idle_polls": 9803, 00:14:39.468 "completions": 0, 00:14:39.468 "requests": 0, 00:14:39.468 "request_latency": 0, 00:14:39.468 "pending_free_request": 0, 00:14:39.468 "pending_rdma_read": 0, 00:14:39.468 "pending_rdma_write": 0, 00:14:39.468 "pending_rdma_send": 0, 00:14:39.468 "total_send_wrs": 0, 00:14:39.468 "send_doorbell_updates": 0, 00:14:39.468 "total_recv_wrs": 4096, 00:14:39.468 "recv_doorbell_updates": 1 00:14:39.468 }, 00:14:39.468 { 00:14:39.468 "name": "mlx5_1", 00:14:39.468 "polls": 9803, 00:14:39.468 "idle_polls": 9803, 00:14:39.468 "completions": 0, 00:14:39.468 "requests": 0, 00:14:39.468 "request_latency": 0, 00:14:39.468 "pending_free_request": 0, 00:14:39.468 "pending_rdma_read": 0, 00:14:39.468 "pending_rdma_write": 0, 00:14:39.468 "pending_rdma_send": 0, 00:14:39.468 "total_send_wrs": 0, 
00:14:39.468 "send_doorbell_updates": 0, 00:14:39.468 "total_recv_wrs": 4096, 00:14:39.468 "recv_doorbell_updates": 1 00:14:39.468 } 00:14:39.468 ] 00:14:39.468 } 00:14:39.468 ] 00:14:39.468 }, 00:14:39.468 { 00:14:39.468 "name": "nvmf_tgt_poll_group_002", 00:14:39.468 "admin_qpairs": 0, 00:14:39.468 "io_qpairs": 0, 00:14:39.468 "current_admin_qpairs": 0, 00:14:39.468 "current_io_qpairs": 0, 00:14:39.468 "pending_bdev_io": 0, 00:14:39.468 "completed_nvme_io": 0, 00:14:39.468 "transports": [ 00:14:39.468 { 00:14:39.468 "trtype": "RDMA", 00:14:39.468 "pending_data_buffer": 0, 00:14:39.468 "devices": [ 00:14:39.468 { 00:14:39.468 "name": "mlx5_0", 00:14:39.468 "polls": 5152, 00:14:39.468 "idle_polls": 5152, 00:14:39.468 "completions": 0, 00:14:39.468 "requests": 0, 00:14:39.468 "request_latency": 0, 00:14:39.468 "pending_free_request": 0, 00:14:39.468 "pending_rdma_read": 0, 00:14:39.468 "pending_rdma_write": 0, 00:14:39.468 "pending_rdma_send": 0, 00:14:39.468 "total_send_wrs": 0, 00:14:39.468 "send_doorbell_updates": 0, 00:14:39.468 "total_recv_wrs": 4096, 00:14:39.469 "recv_doorbell_updates": 1 00:14:39.469 }, 00:14:39.469 { 00:14:39.469 "name": "mlx5_1", 00:14:39.469 "polls": 5152, 00:14:39.469 "idle_polls": 5152, 00:14:39.469 "completions": 0, 00:14:39.469 "requests": 0, 00:14:39.469 "request_latency": 0, 00:14:39.469 "pending_free_request": 0, 00:14:39.469 "pending_rdma_read": 0, 00:14:39.469 "pending_rdma_write": 0, 00:14:39.469 "pending_rdma_send": 0, 00:14:39.469 "total_send_wrs": 0, 00:14:39.469 "send_doorbell_updates": 0, 00:14:39.469 "total_recv_wrs": 4096, 00:14:39.469 "recv_doorbell_updates": 1 00:14:39.469 } 00:14:39.469 ] 00:14:39.469 } 00:14:39.469 ] 00:14:39.469 }, 00:14:39.469 { 00:14:39.469 "name": "nvmf_tgt_poll_group_003", 00:14:39.469 "admin_qpairs": 0, 00:14:39.469 "io_qpairs": 0, 00:14:39.469 "current_admin_qpairs": 0, 00:14:39.469 "current_io_qpairs": 0, 00:14:39.469 "pending_bdev_io": 0, 00:14:39.469 "completed_nvme_io": 0, 00:14:39.469 "transports": [ 00:14:39.469 { 00:14:39.469 "trtype": "RDMA", 00:14:39.469 "pending_data_buffer": 0, 00:14:39.469 "devices": [ 00:14:39.469 { 00:14:39.469 "name": "mlx5_0", 00:14:39.469 "polls": 869, 00:14:39.469 "idle_polls": 869, 00:14:39.469 "completions": 0, 00:14:39.469 "requests": 0, 00:14:39.469 "request_latency": 0, 00:14:39.469 "pending_free_request": 0, 00:14:39.469 "pending_rdma_read": 0, 00:14:39.469 "pending_rdma_write": 0, 00:14:39.469 "pending_rdma_send": 0, 00:14:39.469 "total_send_wrs": 0, 00:14:39.469 "send_doorbell_updates": 0, 00:14:39.469 "total_recv_wrs": 4096, 00:14:39.469 "recv_doorbell_updates": 1 00:14:39.469 }, 00:14:39.469 { 00:14:39.469 "name": "mlx5_1", 00:14:39.469 "polls": 869, 00:14:39.469 "idle_polls": 869, 00:14:39.469 "completions": 0, 00:14:39.469 "requests": 0, 00:14:39.469 "request_latency": 0, 00:14:39.469 "pending_free_request": 0, 00:14:39.469 "pending_rdma_read": 0, 00:14:39.469 "pending_rdma_write": 0, 00:14:39.469 "pending_rdma_send": 0, 00:14:39.469 "total_send_wrs": 0, 00:14:39.469 "send_doorbell_updates": 0, 00:14:39.469 "total_recv_wrs": 4096, 00:14:39.469 "recv_doorbell_updates": 1 00:14:39.469 } 00:14:39.469 ] 00:14:39.469 } 00:14:39.469 ] 00:14:39.469 } 00:14:39.469 ] 00:14:39.469 }' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:39.469 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:39.739 19:05:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.739 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.739 Malloc1 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.740 19:05:32 
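jsum is the summing counterpart of jcount; a minimal stand-in matching the jq/awk pipeline the trace just ran:

    jsum() {
      jq "$1" | awk '{s += $1} END {print s}'   # total across all poll groups
    }
    echo "$stats" | jsum '.poll_groups[].io_qpairs'   # -> 0: no host has connected yet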
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.740 [2024-07-25 19:05:32.084370] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:39.740 19:05:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:14:39.740 [2024-07-25 19:05:32.130370] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562' 00:14:39.740 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:39.740 could not add new controller: failed to write to nvme-fabrics device 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.740 19:05:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:43.199 19:05:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.199 19:05:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.199 19:05:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.199 19:05:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:43.199 19:05:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:45.137 19:05:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.671 19:05:39 
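
The NOT wrapper above is the assertion: with allow_any_host disabled and no host entry, the connect must fail (the target logs "does not allow host", the host sees an I/O error on /dev/nvme-fabrics). nvmf_subsystem_add_host at rpc.sh@61 then admits the host and the rpc.sh@62 connect succeeds. Condensed into a hedged sketch of the same check:

SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
# Expected failure: the host is not on the subsystem's allow-list yet.
if nvme connect -i 15 -t rdma -n "$SUBNQN" -q "$HOSTNQN" -a 192.168.100.8 -s 4420; then
    echo "connect unexpectedly succeeded" >&2
    exit 1
fi
# Admit the host, after which the identical connect creates a controller.
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
nvme connect -i 15 -t rdma -n "$SUBNQN" -q "$HOSTNQN" -a 192.168.100.8 -s 4420
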
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:47.671 [2024-07-25 19:05:39.813109] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562' 
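
waitforserial and waitforserial_disconnect (common/autotest_common.sh@1198-1231 in the trace) are the synchronization points: they poll lsblk until a block device carrying the subsystem's serial appears, or disappears again after nvme disconnect. A simplified reconstruction of the traced logic (the || true guard is an addition for set -e safety; the retry bound follows the trace's i++ <= 15):

function waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    while ((i++ <= 15)); do
        sleep 2
        # Count block devices whose SERIAL column matches.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1
}

function waitforserial_disconnect() {
    local serial=$1 i=0
    # Wait until no block device with this serial remains visible.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        ((++i > 15)) && return 1
        sleep 1
    done
    return 0
}
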
00:14:47.671 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:47.671 could not add new controller: failed to write to nvme-fabrics device 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.671 19:05:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:50.958 19:05:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.958 19:05:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.958 19:05:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.958 19:05:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:50.958 19:05:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:52.862 19:05:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 [2024-07-25 19:05:47.455839] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.397 19:05:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:58.686 19:05:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:58.686 19:05:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:58.686 19:05:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # 
local nvme_device_counter=1 nvme_devices=0 00:14:58.686 19:05:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:58.686 19:05:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:00.589 19:05:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:03.122 19:05:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.122 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 [2024-07-25 19:05:55.040170] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.123 19:05:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:06.410 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.410 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.410 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.410 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.410 19:05:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:07.788 19:06:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
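
This is the second of five identical passes of the rpc.sh@81 loop; the remaining iterations below differ only in timestamps. Each pass is roughly equivalent to the following (SUBNQN/HOSTNQN as above; -n 5 pins the namespace to NSID 5 so teardown can remove it by number):

loops=5
for i in $(seq 1 $loops); do
    scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
    # Data-path round trip: connect, wait for the namespace, tear down.
    nvme connect -i 15 -t rdma -n "$SUBNQN" -q "$HOSTNQN" -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n "$SUBNQN"
    waitforserial_disconnect SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
    scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
done
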
00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 [2024-07-25 19:06:02.647766] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.320 19:06:02 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.320 19:06:02 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:13.607 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:13.607 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.607 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.607 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:13.607 19:06:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:15.511 19:06:07 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 [2024-07-25 19:06:10.225399] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.044 19:06:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:21.328 19:06:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:21.328 19:06:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:21.328 19:06:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.328 19:06:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:21.328 19:06:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:23.229 
19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:23.229 19:06:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.759 [2024-07-25 19:06:17.789697] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.759 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.760 19:06:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:29.045 19:06:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.045 19:06:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.045 19:06:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.045 19:06:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:29.045 19:06:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:30.469 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:30.469 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:30.469 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.728 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:30.728 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.728 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:30.728 19:06:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.264 
19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 [2024-07-25 19:06:25.377297] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.264 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 [2024-07-25 19:06:25.425610] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
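
The rpc.sh@99-107 loop running here is the control-plane-only variant: the same subsystem lifecycle, but no host connect. nvmf_subsystem_add_ns is called without -n this time, so the target auto-assigns the first free NSID, which is why the matching remove targets NSID 1. Sketched under the same assumptions as above:

for i in $(seq 1 $loops); do
    scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1   # auto-assigned NSID -> 1
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 1
    scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
done
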
00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 [2024-07-25 19:06:25.477786] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 
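
The loop's final iterations continue below; the run then closes with an nvmf_get_stats dump and jsum checks that the accumulated counters are plausible: 7 admin qpairs and 105 I/O qpairs across the four poll groups, plus non-zero RDMA completions and request latency. The same sums can be computed in jq alone, without the awk stage; a hedged one-liner equivalent:

# Sum io_qpairs across poll groups straight from the RPC output.
scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'
# Likewise for per-device RDMA completion counts.
scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].transports[].devices[].completions] | add'
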
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 [2024-07-25 19:06:25.525969] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 [2024-07-25 19:06:25.574165] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.265 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:33.266 "tick_rate": 2300000000, 00:15:33.266 "poll_groups": [ 00:15:33.266 { 00:15:33.266 "name": 
"nvmf_tgt_poll_group_000", 00:15:33.266 "admin_qpairs": 2, 00:15:33.266 "io_qpairs": 27, 00:15:33.266 "current_admin_qpairs": 0, 00:15:33.266 "current_io_qpairs": 0, 00:15:33.266 "pending_bdev_io": 0, 00:15:33.266 "completed_nvme_io": 125, 00:15:33.266 "transports": [ 00:15:33.266 { 00:15:33.266 "trtype": "RDMA", 00:15:33.266 "pending_data_buffer": 0, 00:15:33.266 "devices": [ 00:15:33.266 { 00:15:33.266 "name": "mlx5_0", 00:15:33.266 "polls": 6330174, 00:15:33.266 "idle_polls": 6329835, 00:15:33.266 "completions": 377, 00:15:33.266 "requests": 188, 00:15:33.266 "request_latency": 35744148, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 318, 00:15:33.266 "send_doorbell_updates": 166, 00:15:33.266 "total_recv_wrs": 4284, 00:15:33.266 "recv_doorbell_updates": 166 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "mlx5_1", 00:15:33.266 "polls": 6330174, 00:15:33.266 "idle_polls": 6330174, 00:15:33.266 "completions": 0, 00:15:33.266 "requests": 0, 00:15:33.266 "request_latency": 0, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 0, 00:15:33.266 "send_doorbell_updates": 0, 00:15:33.266 "total_recv_wrs": 4096, 00:15:33.266 "recv_doorbell_updates": 1 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "nvmf_tgt_poll_group_001", 00:15:33.266 "admin_qpairs": 2, 00:15:33.266 "io_qpairs": 26, 00:15:33.266 "current_admin_qpairs": 0, 00:15:33.266 "current_io_qpairs": 0, 00:15:33.266 "pending_bdev_io": 0, 00:15:33.266 "completed_nvme_io": 126, 00:15:33.266 "transports": [ 00:15:33.266 { 00:15:33.266 "trtype": "RDMA", 00:15:33.266 "pending_data_buffer": 0, 00:15:33.266 "devices": [ 00:15:33.266 { 00:15:33.266 "name": "mlx5_0", 00:15:33.266 "polls": 6414415, 00:15:33.266 "idle_polls": 6414078, 00:15:33.266 "completions": 376, 00:15:33.266 "requests": 188, 00:15:33.266 "request_latency": 35565706, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 319, 00:15:33.266 "send_doorbell_updates": 164, 00:15:33.266 "total_recv_wrs": 4284, 00:15:33.266 "recv_doorbell_updates": 165 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "mlx5_1", 00:15:33.266 "polls": 6414415, 00:15:33.266 "idle_polls": 6414415, 00:15:33.266 "completions": 0, 00:15:33.266 "requests": 0, 00:15:33.266 "request_latency": 0, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 0, 00:15:33.266 "send_doorbell_updates": 0, 00:15:33.266 "total_recv_wrs": 4096, 00:15:33.266 "recv_doorbell_updates": 1 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "nvmf_tgt_poll_group_002", 00:15:33.266 "admin_qpairs": 1, 00:15:33.266 "io_qpairs": 26, 00:15:33.266 "current_admin_qpairs": 0, 00:15:33.266 "current_io_qpairs": 0, 00:15:33.266 "pending_bdev_io": 0, 00:15:33.266 "completed_nvme_io": 127, 00:15:33.266 "transports": [ 00:15:33.266 { 00:15:33.266 "trtype": "RDMA", 00:15:33.266 "pending_data_buffer": 0, 00:15:33.266 "devices": [ 00:15:33.266 { 00:15:33.266 "name": "mlx5_0", 00:15:33.266 "polls": 6186156, 
00:15:33.266 "idle_polls": 6185878, 00:15:33.266 "completions": 317, 00:15:33.266 "requests": 158, 00:15:33.266 "request_latency": 33817500, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 275, 00:15:33.266 "send_doorbell_updates": 136, 00:15:33.266 "total_recv_wrs": 4254, 00:15:33.266 "recv_doorbell_updates": 136 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "mlx5_1", 00:15:33.266 "polls": 6186156, 00:15:33.266 "idle_polls": 6186156, 00:15:33.266 "completions": 0, 00:15:33.266 "requests": 0, 00:15:33.266 "request_latency": 0, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 0, 00:15:33.266 "send_doorbell_updates": 0, 00:15:33.266 "total_recv_wrs": 4096, 00:15:33.266 "recv_doorbell_updates": 1 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "nvmf_tgt_poll_group_003", 00:15:33.266 "admin_qpairs": 2, 00:15:33.266 "io_qpairs": 26, 00:15:33.266 "current_admin_qpairs": 0, 00:15:33.266 "current_io_qpairs": 0, 00:15:33.266 "pending_bdev_io": 0, 00:15:33.266 "completed_nvme_io": 77, 00:15:33.266 "transports": [ 00:15:33.266 { 00:15:33.266 "trtype": "RDMA", 00:15:33.266 "pending_data_buffer": 0, 00:15:33.266 "devices": [ 00:15:33.266 { 00:15:33.266 "name": "mlx5_0", 00:15:33.266 "polls": 5074153, 00:15:33.266 "idle_polls": 5073900, 00:15:33.266 "completions": 274, 00:15:33.266 "requests": 137, 00:15:33.266 "request_latency": 22709374, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 218, 00:15:33.266 "send_doorbell_updates": 125, 00:15:33.266 "total_recv_wrs": 4233, 00:15:33.266 "recv_doorbell_updates": 126 00:15:33.266 }, 00:15:33.266 { 00:15:33.266 "name": "mlx5_1", 00:15:33.266 "polls": 5074153, 00:15:33.266 "idle_polls": 5074153, 00:15:33.266 "completions": 0, 00:15:33.266 "requests": 0, 00:15:33.266 "request_latency": 0, 00:15:33.266 "pending_free_request": 0, 00:15:33.266 "pending_rdma_read": 0, 00:15:33.266 "pending_rdma_write": 0, 00:15:33.266 "pending_rdma_send": 0, 00:15:33.266 "total_send_wrs": 0, 00:15:33.266 "send_doorbell_updates": 0, 00:15:33.266 "total_recv_wrs": 4096, 00:15:33.266 "recv_doorbell_updates": 1 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 } 00:15:33.266 ] 00:15:33.266 }' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:33.266 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:33.526 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1344 > 0 )) 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 127836728 > 0 )) 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:33.527 rmmod nvme_rdma 00:15:33.527 rmmod nvme_fabrics 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 733390 ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 733390 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 733390 ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 733390 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 733390 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 733390' 00:15:33.527 killing process with pid 733390 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 733390 00:15:33.527 19:06:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 733390 00:15:33.786 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.786 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:33.786 00:15:33.786 real 1m1.601s 00:15:33.786 user 3m43.210s 00:15:33.786 sys 0m6.733s 00:15:33.786 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.786 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.786 ************************************ 00:15:33.786 END TEST nvmf_rpc 00:15:33.787 ************************************ 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.046 ************************************ 00:15:34.046 START TEST nvmf_invalid 00:15:34.046 ************************************ 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:34.046 * Looking for test storage... 
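The assertions traced above at target/rpc.sh@112-118 aggregate the per-poll-group counters out of the nvmf_get_stats dump with a small jq+awk helper. A minimal sketch of that jsum pattern, reconstructed from the xtrace output (xtrace only shows the jq and awk invocations; the function wrapper and the herestring feeding $stats into jq are assumptions):

  jsum() {
      # Sum one numeric field across every element matched by the jq filter.
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

The sums check out against the stats dump above: admin_qpairs 2+2+1+2 = 7, io_qpairs 27+26+26+26 = 105, completions 377+376+317+274 (plus the idle mlx5_1 zeros) = 1344, and request_latency 35744148+35565706+33817500+22709374 = 127836728, matching the (( 7 > 0 )), (( 105 > 0 )), (( 1344 > 0 )) and (( 127836728 > 0 )) guards in the trace.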
00:15:34.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.046 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.047 19:06:26 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.047 19:06:26 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:15:40.613 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:15:40.613 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:15:40.613 Found net devices under 0000:af:00.0: mlx_0_0 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:15:40.613 Found net devices under 0000:af:00.1: mlx_0_1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:40.613 19:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
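The per-interface lookups traced at nvmf/common.sh@112-113 resolve each RDMA netdev to its IPv4 address by parsing `ip -o -4 addr show`. A minimal reconstruction of get_ip_address from the trace (only the three piped commands appear in the xtrace; the function wrapper is inferred):

  get_ip_address() {
      local interface=$1
      # -o prints one address record per line, so field 4 is always "ADDR/PREFIX";
      # cut strips the prefix length, leaving the bare address.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

As traced, get_ip_address mlx_0_0 yields 192.168.100.8 and get_ip_address mlx_0_1 yields 192.168.100.9.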
00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:40.613 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:40.613 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:15:40.613 altname enp175s0f0np0 00:15:40.613 altname ens801f0np0 00:15:40.613 inet 192.168.100.8/24 scope global mlx_0_0 00:15:40.613 valid_lft forever preferred_lft forever 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:40.613 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:40.613 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:40.614 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:15:40.614 altname enp175s0f1np1 00:15:40.614 altname ens801f1np1 00:15:40.614 inet 192.168.100.9/24 scope global mlx_0_1 00:15:40.614 valid_lft forever preferred_lft forever 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:40.614 19:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:40.614 192.168.100.9' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:40.614 192.168.100.9' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:40.614 192.168.100.9' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=746346 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 746346 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 746346 ']' 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:40.614 19:06:32 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.614 [2024-07-25 19:06:32.319814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:40.614 [2024-07-25 19:06:32.319859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.614 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.614 [2024-07-25 19:06:32.388973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.614 [2024-07-25 19:06:32.464911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.614 [2024-07-25 19:06:32.464950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.614 [2024-07-25 19:06:32.464958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.614 [2024-07-25 19:06:32.464964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.614 [2024-07-25 19:06:32.464969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
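Before nvmfappstart launches nvmf_tgt, nvmftestinit derives the two target addresses from the newline-separated RDMA IP list (nvmf/common.sh@456-458). A sketch reconstructed from the trace, with the variable assignments inferred from the @457/@458 command substitutions:

  RDMA_IP_LIST=$(get_available_rdma_ips)                                  # "192.168.100.8\n192.168.100.9"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

The emptiness test at @459 then guards against a missing first target IP before the transport options are assembled as '-t rdma --num-shared-buffers 1024' at @463 and nvme-rdma is modprobed at @474.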
00:15:40.614 [2024-07-25 19:06:32.468921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.614 [2024-07-25 19:06:32.468948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.614 [2024-07-25 19:06:32.469076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.614 [2024-07-25 19:06:32.469077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:40.873 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13417 00:15:41.133 [2024-07-25 19:06:33.364833] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:41.133 { 00:15:41.133 "nqn": "nqn.2016-06.io.spdk:cnode13417", 00:15:41.133 "tgt_name": "foobar", 00:15:41.133 "method": "nvmf_create_subsystem", 00:15:41.133 "req_id": 1 00:15:41.133 } 00:15:41.133 Got JSON-RPC error response 00:15:41.133 response: 00:15:41.133 { 00:15:41.133 "code": -32603, 00:15:41.133 "message": "Unable to find target foobar" 00:15:41.133 }' 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:41.133 { 00:15:41.133 "nqn": "nqn.2016-06.io.spdk:cnode13417", 00:15:41.133 "tgt_name": "foobar", 00:15:41.133 "method": "nvmf_create_subsystem", 00:15:41.133 "req_id": 1 00:15:41.133 } 00:15:41.133 Got JSON-RPC error response 00:15:41.133 response: 00:15:41.133 { 00:15:41.133 "code": -32603, 00:15:41.133 "message": "Unable to find target foobar" 00:15:41.133 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23161 00:15:41.133 [2024-07-25 19:06:33.565567] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23161: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:41.133 { 00:15:41.133 "nqn": "nqn.2016-06.io.spdk:cnode23161", 00:15:41.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:41.133 "method": "nvmf_create_subsystem", 00:15:41.133 "req_id": 1 00:15:41.133 } 00:15:41.133 Got JSON-RPC 
error response 00:15:41.133 response: 00:15:41.133 { 00:15:41.133 "code": -32602, 00:15:41.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:41.133 }' 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:41.133 { 00:15:41.133 "nqn": "nqn.2016-06.io.spdk:cnode23161", 00:15:41.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:41.133 "method": "nvmf_create_subsystem", 00:15:41.133 "req_id": 1 00:15:41.133 } 00:15:41.133 Got JSON-RPC error response 00:15:41.133 response: 00:15:41.133 { 00:15:41.133 "code": -32602, 00:15:41.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:41.133 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:41.133 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26658 00:15:41.392 [2024-07-25 19:06:33.770211] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26658: invalid model number 'SPDK_Controller' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:41.392 { 00:15:41.392 "nqn": "nqn.2016-06.io.spdk:cnode26658", 00:15:41.392 "model_number": "SPDK_Controller\u001f", 00:15:41.392 "method": "nvmf_create_subsystem", 00:15:41.392 "req_id": 1 00:15:41.392 } 00:15:41.392 Got JSON-RPC error response 00:15:41.392 response: 00:15:41.392 { 00:15:41.392 "code": -32602, 00:15:41.392 "message": "Invalid MN SPDK_Controller\u001f" 00:15:41.392 }' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:41.392 { 00:15:41.392 "nqn": "nqn.2016-06.io.spdk:cnode26658", 00:15:41.392 "model_number": "SPDK_Controller\u001f", 00:15:41.392 "method": "nvmf_create_subsystem", 00:15:41.392 "req_id": 1 00:15:41.392 } 00:15:41.392 Got JSON-RPC error response 00:15:41.392 response: 00:15:41.392 { 00:15:41.392 "code": -32602, 00:15:41.392 "message": "Invalid MN SPDK_Controller\u001f" 00:15:41.392 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 86 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.392 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.393 19:06:33 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:41.393 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+='^' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:41.652 19:06:33 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'VAT7H$no@/tv ^V|?mN[`' 00:15:41.652 19:06:33 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'VAT7H$no@/tv ^V|?mN[`' nqn.2016-06.io.spdk:cnode1185 00:15:41.652 [2024-07-25 19:06:34.115375] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1185: invalid serial number 'VAT7H$no@/tv ^V|?mN[`' 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:41.913 { 00:15:41.913 "nqn": "nqn.2016-06.io.spdk:cnode1185", 00:15:41.913 "serial_number": "VAT7H$no@/tv ^V|?mN[`", 00:15:41.913 "method": "nvmf_create_subsystem", 00:15:41.913 "req_id": 1 00:15:41.913 } 00:15:41.913 Got JSON-RPC error response 00:15:41.913 response: 00:15:41.913 { 00:15:41.913 "code": -32602, 00:15:41.913 "message": "Invalid SN VAT7H$no@/tv ^V|?mN[`" 00:15:41.913 }' 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:41.913 { 00:15:41.913 "nqn": "nqn.2016-06.io.spdk:cnode1185", 00:15:41.913 "serial_number": "VAT7H$no@/tv ^V|?mN[`", 00:15:41.913 "method": "nvmf_create_subsystem", 00:15:41.913 "req_id": 1 00:15:41.913 } 00:15:41.913 Got JSON-RPC error response 00:15:41.913 response: 00:15:41.913 { 00:15:41.913 "code": -32602, 00:15:41.913 "message": "Invalid SN VAT7H$no@/tv ^V|?mN[`" 00:15:41.913 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:41.913 19:06:34 
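The loop traced above is gen_random_s from test/nvmf/target/invalid.sh: it assembles a random printable string one byte at a time (printf %x turns a decimal character code into hex, echo -e expands the \xNN escape, and string+= appends the literal byte). A minimal sketch of the same pattern in plain bash, assuming the chars array shown in the trace (codes 32 through 127):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # printable ASCII, matching the chars=... array in the trace
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, render it as \xNN, append the literal character
            string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The [[ V == \- ]] test at invalid.sh@28 guards the echo against a generated string that starts with a dash; the trace continues below, repeating the same loop for a 41-character string.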
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.913 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 42 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:41.914 19:06:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:41.914 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:41.915 19:06:34 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:41.915 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x3c' 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:15:42.175 19:06:34 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'YQui /dev/null' 00:15:44.764 19:06:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.764 19:06:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:44.764 19:06:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:44.764 19:06:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:44.764 19:06:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A 
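The generated strings feed negative tests: nvmf_create_subsystem must refuse an over-long serial number, and the script asserts on the JSON-RPC error text. A sketch of the check for the 21-character case captured above (the 20-byte limit is the NVMe Identify Controller SN field; the 41-character string assembled after it presumably exercises the 40-byte model-number field the same way):

    sn='VAT7H$no@/tv ^V|?mN[`'      # 21 characters; the SN field holds at most 20
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode1185 2>&1) || true
    [[ $out == *"Invalid SN"* ]]    # expect JSON-RPC error -32602, as in the response above

From here the log switches to the next test in the suite, nvmf_connect_stress, which begins by tearing down leftover state and re-probing the RDMA hardware.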
pci_drivers 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.334 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:15:51.335 Found 
0000:af:00.0 (0x15b3 - 0x1017) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:15:51.335 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:15:51.335 Found net devices under 0000:af:00.0: mlx_0_0 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:15:51.335 Found net devices under 0000:af:00.1: mlx_0_1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.335 19:06:42 
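The probe above comes from nvmf/common.sh: it builds a table of known Intel (e810/x722) and Mellanox PCI IDs, keeps the two mlx5 ports it finds (vendor 0x15b3, device 0x1017 is a ConnectX-5), and resolves each PCI address to its kernel netdev through sysfs. A condensed sketch of that resolution step, assuming pci_devs already holds the matched addresses:

    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:af:00.0 and 0000:af:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # the kernel lists the netdev name here
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: mlx_0_0, mlx_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done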
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.335 
19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:51.335 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:51.335 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:15:51.335 altname enp175s0f0np0 00:15:51.335 altname ens801f0np0 00:15:51.335 inet 192.168.100.8/24 scope global mlx_0_0 00:15:51.335 valid_lft forever preferred_lft forever 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:51.335 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:51.335 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:51.335 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:15:51.335 altname enp175s0f1np1 00:15:51.335 altname ens801f1np1 00:15:51.335 inet 192.168.100.9/24 scope global mlx_0_1 00:15:51.336 valid_lft forever preferred_lft forever 00:15:51.336 19:06:42 
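allocate_nic_ips then reads each RDMA interface's IPv4 address; the ip/awk/cut pipeline in the trace reduces the one-line 'ip -o -4 addr show' output to a bare address (192.168.100.8 and 192.168.100.9 here). The helper, reconstructed from the trace:

    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8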
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.336 
19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:51.336 192.168.100.9' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:51.336 192.168.100.9' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:51.336 192.168.100.9' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=750538 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 750538 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 750538 ']' 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.336 19:06:42 
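With two reachable RDMA IPs in hand, nvmfappstart launches the target application with core mask 0xE and waits for its JSON-RPC socket. A reduced sketch of that handshake, using the binary path and socket seen in the trace (the polling loop stands in for the fuller waitforlisten helper):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target answers
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done

The EAL banner and the three 'Reactor started' notices below are the DPDK side of that startup: cores 1-3 (mask 0xE) each get a reactor thread.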
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.336 19:06:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.336 [2024-07-25 19:06:42.934952] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:51.336 [2024-07-25 19:06:42.935004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.336 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.336 [2024-07-25 19:06:43.008023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.336 [2024-07-25 19:06:43.086643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.336 [2024-07-25 19:06:43.086675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.336 [2024-07-25 19:06:43.086683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.336 [2024-07-25 19:06:43.086689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.336 [2024-07-25 19:06:43.086694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.336 [2024-07-25 19:06:43.086745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.336 [2024-07-25 19:06:43.086772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.336 [2024-07-25 19:06:43.086773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.336 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.336 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:51.336 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.336 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.336 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.596 [2024-07-25 19:06:43.839413] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfb6580/0xfbaa70) succeed. 
00:15:51.596 [2024-07-25 19:06:43.848721] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfb7b20/0xffc110) succeed. 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.596 [2024-07-25 19:06:43.971918] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:51.596 NULL1 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=750739 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 
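The rpc_cmd calls above provision the target for the stress run: an RDMA transport, a subsystem that allows up to 10 namespaces, a listener on the first RDMA IP, and a null bdev as backing storage. The same sequence as direct rpc.py calls, values copied from the trace:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB backed by nothing, 512-byte blocks

The connect_stress binary was then started in the background against that subsystem (PERF_PID=750739 above), and its pid becomes the loop condition for the rest of the test.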
19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.596 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.855 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.114 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.114 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:52.114 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.114 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.114 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:52.373 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.373 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.373 19:06:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:52.631 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.631 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:52.631 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:52.631 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.631 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.199 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.199 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:53.199 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.199 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.199 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.458 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.458 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:53.458 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.458 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.458 19:06:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.716 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.717 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:53.717 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.717 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.717 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:53.975 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.975 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:53.975 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:53.975 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.975 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.234 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.234 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:54.234 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.234 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.234 19:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:54.802 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.802 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:54.802 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:54.802 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.802 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.060 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.060 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:55.060 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.060 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.060 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.319 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.319 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:55.319 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.319 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.319 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.578 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.578 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:55.578 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.578 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.578 19:06:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:55.836 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.836 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:55.836 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:55.836 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.836 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.404 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.404 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:56.404 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.404 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.404 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.662 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.662 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:56.662 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.662 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.662 19:06:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:56.921 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.921 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:56.921 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:56.921 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.921 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.179 19:06:49 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.179 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:57.179 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.179 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.179 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:57.745 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.745 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:57.745 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:57.745 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.745 19:06:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.002 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.002 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:58.002 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.002 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.003 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.261 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.261 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:58.261 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.261 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.261 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.519 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.519 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:58.519 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.519 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.519 19:06:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:58.777 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.777 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:58.777 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:58.777 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.777 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.344 
19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.344 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:59.344 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.344 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.344 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.602 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.602 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:59.602 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.602 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.602 19:06:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:59.861 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.861 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:15:59.861 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:59.861 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.861 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.119 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.119 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:00.119 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.119 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.119 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.686 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:00.686 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.686 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.686 19:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:00.944 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.944 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:00.945 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:00.945 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.945 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
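The records above and below are all the same two-step beat from connect_stress.sh@34-35: probe the stress client with kill -0 (signal 0 delivers nothing, it only tests liveness), then issue RPCs at the target so connect/disconnect storms race against control-plane traffic. A minimal sketch of that harness pattern, assuming the usual SPDK tree layout for rpc.py and the test binary (the NQN, address, and options are the ones from this run; the RPC replayed in the loop is an arbitrary cheap one, not necessarily what the script sends):

  rpc=./scripts/rpc.py                       # assumed location of SPDK's rpc.py
  # Build the target side: subsystem, RDMA listener, null backing bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks

  # Launch the stress client in the background and remember its pid
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # While the client is alive, keep the RPC plane busy
  while kill -0 "$PERF_PID" 2>/dev/null; do  # kill -0: liveness probe only
    $rpc nvmf_get_subsystems >/dev/null      # any inexpensive RPC works here
  done
  wait "$PERF_PID"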
00:16:01.202 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.202 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:01.202 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.202 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.202 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.460 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.460 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:01.460 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.460 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.460 19:06:53 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.718 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.718 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:01.718 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.718 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.718 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.718 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 750739 00:16:02.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (750739) - No such process 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 750739 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:02.285 rmmod nvme_rdma 00:16:02.285 rmmod nvme_fabrics 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 750538 ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 750538 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 750538 ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 750538 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750538 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750538' 00:16:02.285 killing process with pid 750538 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 750538 00:16:02.285 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 750538 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:02.543 00:16:02.543 real 0m17.959s 00:16:02.543 user 0m43.442s 00:16:02.543 sys 0m6.330s 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.543 ************************************ 00:16:02.543 END TEST nvmf_connect_stress 00:16:02.543 ************************************ 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.543 ************************************ 00:16:02.543 START TEST nvmf_fused_ordering 00:16:02.543 ************************************ 00:16:02.543 19:06:54 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:02.801 * Looking for test storage... 00:16:02.801 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.801 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.802 19:06:55 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.373 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 
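gather_supported_nvmf_pci_devs, whose trace begins here, buckets candidate NICs into per-family arrays (e810, x722, mlx) via a pci_bus_cache map keyed by "vendor:device". A sketch of how such a cache can be built and consumed, assuming lspci is available (the ID keys are the ones this script checks; the lspci parsing is an illustration, not the script's own code):

  declare -A pci_bus_cache                    # "0xVVVV:0xDDDD" -> space-separated BDFs
  while read -r bdf vendor device; do
    pci_bus_cache["0x$vendor:0x$device"]+=" $bdf"
  done < <(lspci -Dn | awk '{split($3, id, ":"); print $1, id[1], id[2]}')

  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})   # unquoted on purpose: word-split into BDFs
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # ConnectX-5, the part found below
  pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")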
00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:16:09.374 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:16:09.374 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:16:09.374 19:07:00 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:16:09.374 Found net devices under 0000:af:00.0: mlx_0_0 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:16:09.374 Found net devices under 0000:af:00.1: mlx_0_1 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 
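The pci_net_devs globs traced above are how a PCI function gets resolved to its kernel net device: sysfs lists a NIC's netdevs under /sys/bus/pci/devices/<BDF>/net/. A sketch of the same lookup for one of the ports found in this run:

  shopt -s nullglob                                  # empty match -> empty array, not the literal pattern
  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one path per netdev on this function
  if (( ${#pci_net_devs[@]} )); then
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  fi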
00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:09.374 
19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:09.374 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:09.374 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:09.374 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:16:09.374 altname enp175s0f0np0 00:16:09.374 altname ens801f0np0 00:16:09.374 inet 192.168.100.8/24 scope global mlx_0_0 00:16:09.375 valid_lft forever preferred_lft forever 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:09.375 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:09.375 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:16:09.375 altname enp175s0f1np1 00:16:09.375 altname ens801f1np1 00:16:09.375 inet 192.168.100.9/24 scope global mlx_0_1 00:16:09.375 valid_lft forever preferred_lft forever 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # 
mapfile -t rxe_net_devs 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:09.375 192.168.100.9' 00:16:09.375 19:07:00 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:09.375 192.168.100.9' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:09.375 192.168.100.9' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=755619 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 755619 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 755619 ']' 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:09.375 19:07:00 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.375 [2024-07-25 19:07:00.935256] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
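nvmfappstart, traced just above, is the standard bring-up: launch nvmf_tgt in the background (-m 0x2 is a coremask pinning it to core 1, matching the "Reactor started on core 1" notice below; -e 0xFFFF enables all tracepoint groups), then block until the daemon answers on its UNIX RPC socket. A minimal sketch of that start-and-wait pattern, assuming the in-tree binary and rpc.py paths (the flags and socket path are the ones logged; polling rpc_get_methods is one way to wait, not necessarily waitforlisten's exact mechanism):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll the RPC socket until the target responds; bail out if it died early
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done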
00:16:09.375 [2024-07-25 19:07:00.935306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.375 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.375 [2024-07-25 19:07:01.004255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.375 [2024-07-25 19:07:01.075811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.375 [2024-07-25 19:07:01.075851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.375 [2024-07-25 19:07:01.075857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.375 [2024-07-25 19:07:01.075863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.375 [2024-07-25 19:07:01.075868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.375 [2024-07-25 19:07:01.075908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.375 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.375 [2024-07-25 19:07:01.838426] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b4eb0/0x11b93a0) succeed. 00:16:09.635 [2024-07-25 19:07:01.847546] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b63b0/0x11faa40) succeed. 
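With the target up, fused_ordering.sh@15 creates the RDMA transport, and claiming that transport is what triggers the two create_ib_device notices above for mlx5_0 and mlx5_1. The equivalent direct call, assuming the usual rpc.py path (the options are the ones from this run; per my reading of rpc.py, -u is the I/O unit size in bytes and --num-shared-buffers sizes the shared receive buffer pool):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192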
00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.635 [2024-07-25 19:07:01.919002] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.635 NULL1 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.635 19:07:01 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:09.635 [2024-07-25 19:07:01.974023] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:09.635 [2024-07-25 19:07:01.974069] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755792 ] 00:16:09.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.896 Attached to nqn.2016-06.io.spdk:cnode1 00:16:09.896 Namespace ID: 1 size: 1GB
00:16:09.896 fused_ordering(0) ... fused_ordering(1023) 00:16:10.419 [all 1024 fused_ordering indices, 0 through 1023, were logged in ascending order between 00:16:09.896 and 00:16:10.419; the repetitive per-index lines are condensed here]
00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:10.419 rmmod nvme_rdma 00:16:10.419 rmmod nvme_fabrics 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 --
# set -e 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 755619 ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 755619 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 755619 ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 755619 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 755619 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 755619' 00:16:10.419 killing process with pid 755619 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 755619 00:16:10.419 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 755619 00:16:10.679 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.679 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:10.679 00:16:10.679 real 0m8.057s 00:16:10.679 user 0m4.567s 00:16:10.679 sys 0m4.811s 00:16:10.679 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:10.679 19:07:02 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:10.679 ************************************ 00:16:10.679 END TEST nvmf_fused_ordering 00:16:10.679 ************************************ 00:16:10.679 19:07:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:16:10.679 19:07:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:10.679 19:07:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:10.679 19:07:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:10.679 ************************************ 00:16:10.679 START TEST nvmf_ns_masking 00:16:10.679 ************************************ 00:16:10.679 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:16:10.679 * Looking for test storage... 
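The teardown above follows the killprocess pattern from autotest_common.sh: check that the pid is still alive, refuse to touch a sudo wrapper, then kill and reap the process so the RPC socket and shared memory are free for the next test. A simplified sketch of that pattern (not the verbatim helper; it assumes the target was started by the same shell, so wait can reap it):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for nvmf_tgt
        [ "$name" = sudo ] && return 1           # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap the child
    }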
00:16:10.939 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.939
19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=07c9ba65-57d8-45c3-91aa-07b9b158d77f 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b2f36ca8-bbd4-4b15-9d0e-0e6be7bf4c30 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0d189995-412f-4e51-a00a-1b9908d7f60f 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:10.939 19:07:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.505 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.505 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:16:17.506 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:16:17.506 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:16:17.506 Found net devices under 0000:af:00.0: mlx_0_0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.506 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:16:17.506 Found net devices under 0000:af:00.1: mlx_0_1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:17.506 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:17.506 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:17.506 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:16:17.506 altname enp175s0f0np0 00:16:17.506 altname ens801f0np0 00:16:17.506 inet 192.168.100.8/24 scope global mlx_0_0 00:16:17.506 valid_lft forever preferred_lft forever 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:17.506 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:17.506 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:16:17.506 altname enp175s0f1np1 00:16:17.506 altname ens801f1np1 00:16:17.506 inet 192.168.100.9/24 scope 
global mlx_0_1 00:16:17.506 valid_lft forever preferred_lft forever 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:17.506 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.507 19:07:08 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:17.507 192.168.100.9' 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:17.507 192.168.100.9' 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:17.507 192.168.100.9' 00:16:17.507 19:07:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=759117 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 759117 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 759117 ']' 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.507 19:07:09 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 [2024-07-25 19:07:09.086176] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:17.507 [2024-07-25 19:07:09.086227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.507 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.507 [2024-07-25 19:07:09.156231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.507 [2024-07-25 19:07:09.227044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.507 [2024-07-25 19:07:09.227085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.507 [2024-07-25 19:07:09.227091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.507 [2024-07-25 19:07:09.227097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.507 [2024-07-25 19:07:09.227102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.507 [2024-07-25 19:07:09.227119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.507 19:07:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:17.766 [2024-07-25 19:07:10.141960] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18ffbb0/0x19040a0) succeed. 00:16:17.766 [2024-07-25 19:07:10.151245] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19010b0/0x1945740) succeed. 
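For orientation, the target-side setup that ns_masking.sh drives through rpc.py in the trace around this point boils down to the following sequence (arguments copied from this run; the long /var/jenkins/... path to rpc.py is shortened here):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The masking behaviour under test begins once the namespace is re-added with --no-auto-visible and selectively exposed via nvmf_ns_add_host / nvmf_ns_remove_host, as the later rpc.py calls in this trace show.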
00:16:17.766 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:17.766 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:17.766 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:18.025 Malloc1 00:16:18.025 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:18.283 Malloc2 00:16:18.283 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:18.542 19:07:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:18.800 19:07:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:18.800 [2024-07-25 19:07:11.201048] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:18.800 19:07:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:18.800 19:07:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d189995-412f-4e51-a00a-1b9908d7f60f -a 192.168.100.8 -s 4420 -i 4 00:16:19.737 19:07:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.737 19:07:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:19.737 19:07:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.737 19:07:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:19.737 19:07:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.269 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:22.269 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:22.269 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.269 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.270 [ 0]:0x1 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c8f01aed9b5349e7aa8372d7dd85f5c9 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c8f01aed9b5349e7aa8372d7dd85f5c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:22.270 [ 0]:0x1 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c8f01aed9b5349e7aa8372d7dd85f5c9 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c8f01aed9b5349e7aa8372d7dd85f5c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:22.270 [ 1]:0x2 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:22.270 19:07:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:16:22.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.836 19:07:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.094 19:07:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:23.352 19:07:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:23.352 19:07:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d189995-412f-4e51-a00a-1b9908d7f60f -a 192.168.100.8 -s 4420 -i 4 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:24.289 19:07:16 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.192 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:26.193 [ 0]:0x2 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:26.193 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:26.451 [ 0]:0x1 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:26.451 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.710 19:07:18 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c8f01aed9b5349e7aa8372d7dd85f5c9 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c8f01aed9b5349e7aa8372d7dd85f5c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:26.710 [ 1]:0x2 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.710 19:07:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:26.970 [ 0]:0x2 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:26.970 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.538 19:07:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:27.796 19:07:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:27.796 19:07:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d189995-412f-4e51-a00a-1b9908d7f60f -a 192.168.100.8 -s 4420 -i 4 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:28.732 19:07:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.637 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.637 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.637 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.637 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:30.637 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.637 19:07:23 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:30.896 [ 0]:0x1 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c8f01aed9b5349e7aa8372d7dd85f5c9 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c8f01aed9b5349e7aa8372d7dd85f5c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:30.896 [ 1]:0x2 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.896 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:31.156 19:07:23 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.156 [ 0]:0x2 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:16:31.156 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:31.415 [2024-07-25 19:07:23.723302] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:31.415 request: 00:16:31.415 { 00:16:31.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.415 "nsid": 2, 00:16:31.415 "host": "nqn.2016-06.io.spdk:host1", 00:16:31.415 "method": "nvmf_ns_remove_host", 00:16:31.415 "req_id": 1 00:16:31.415 } 00:16:31.415 Got JSON-RPC error response 00:16:31.415 response: 00:16:31.415 { 00:16:31.415 "code": -32602, 00:16:31.415 "message": "Invalid parameters" 00:16:31.415 } 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:31.415 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:31.416 19:07:23 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:31.416 [ 0]:0x2 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e32f949dcb154653afb6464297c13674 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e32f949dcb154653afb6464297c13674 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:31.416 19:07:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=761842 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 761842 /var/tmp/host.sock 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 761842 ']' 00:16:32.352 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:32.353 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.353 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:32.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
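From here the test checks masking from a second SPDK process instead of the kernel host: a separate spdk_tgt is launched with its own RPC socket (-r /var/tmp/host.sock -m 2) and attached to the target once per host NQN. The host-side attach, with the exact arguments traced below, is:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

With per-host masking in effect, the controller attached as host1 should enumerate only namespace 1 (bdev nvme0n1 in this run) and the host2 controller only namespace 2 (nvme1n2).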
00:16:32.353 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.353 19:07:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:32.353 [2024-07-25 19:07:24.634229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:32.353 [2024-07-25 19:07:24.634279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761842 ] 00:16:32.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.353 [2024-07-25 19:07:24.700513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.353 [2024-07-25 19:07:24.777928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.289 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.289 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:33.289 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.289 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:33.548 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 07c9ba65-57d8-45c3-91aa-07b9b158d77f 00:16:33.548 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:33.548 19:07:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 07C9BA6557D845C391AA07B9B158D77F -i 00:16:33.807 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b2f36ca8-bbd4-4b15-9d0e-0e6be7bf4c30 00:16:33.807 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:33.807 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B2F36CA8BBD44B159D0E0E6BE7BF4C30 -i 00:16:33.807 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:34.065 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:34.323 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:34.323 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:34.581 nvme0n1 00:16:34.582 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:34.582 19:07:26 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:34.840 nvme1n2 00:16:34.840 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:34.840 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:34.840 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:34.840 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:34.840 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:35.098 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:35.098 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:35.098 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:35.098 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 07c9ba65-57d8-45c3-91aa-07b9b158d77f == \0\7\c\9\b\a\6\5\-\5\7\d\8\-\4\5\c\3\-\9\1\a\a\-\0\7\b\9\b\1\5\8\d\7\7\f ]] 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b2f36ca8-bbd4-4b15-9d0e-0e6be7bf4c30 == \b\2\f\3\6\c\a\8\-\b\b\d\4\-\4\b\1\5\-\9\d\0\e\-\0\e\6\b\e\7\b\f\4\c\3\0 ]] 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 761842 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 761842 ']' 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 761842 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.355 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 761842 
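The -g NGUID values passed to nvmf_subsystem_add_ns above come from the uuid2nguid helper in nvmf/common.sh; the trace only shows its 'tr -d -' step, but comparing the input UUID with the resulting -g argument, the conversion is equivalent to this sketch (the uppercasing is inferred from the output, not visible in the trace):

  nguid=$(echo 07c9ba65-57d8-45c3-91aa-07b9b158d77f | tr -d - | tr '[:lower:]' '[:upper:]')
  # -> 07C9BA6557D845C391AA07B9B158D77F, the -g value used in this run

bdev_get_bdevs then reports the UUIDs back in canonical dashed form, which is what the [[ ... == ... ]] checks above compare against.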
00:16:35.614 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:35.614 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:35.614 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 761842' 00:16:35.614 killing process with pid 761842 00:16:35.614 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 761842 00:16:35.614 19:07:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 761842 00:16:35.871 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:36.129 rmmod nvme_rdma 00:16:36.129 rmmod nvme_fabrics 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 759117 ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 759117 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 759117 ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 759117 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 759117 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 759117' 00:16:36.129 killing process with pid 759117 
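Note: both teardowns here go through killprocess from common/autotest_common.sh: sanity-check the pid, confirm what it is (the `ps --no-headers -o comm=` probe), announce, kill, and reap. A simplified sketch of that pattern, not the verbatim helper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        # The real helper also checks `ps --no-headers -o comm= $pid` so it never
        # signals a sudo wrapper by mistake; omitted here for brevity.
        kill -0 "$pid" 2>/dev/null || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true    # reap; the killed app may exit nonzero
    }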
00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 759117 00:16:36.129 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 759117 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:36.388 00:16:36.388 real 0m25.664s 00:16:36.388 user 0m30.167s 00:16:36.388 sys 0m6.425s 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.388 ************************************ 00:16:36.388 END TEST nvmf_ns_masking 00:16:36.388 ************************************ 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:36.388 19:07:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:36.389 19:07:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.389 19:07:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.389 ************************************ 00:16:36.389 START TEST nvmf_nvme_cli 00:16:36.389 ************************************ 00:16:36.389 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:16:36.648 * Looking for test storage... 
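Note: run_test is what prints the START TEST / END TEST banners and the real/user/sys timing seen here. Roughly, with the bookkeeping simplified, it behaves like this sketch:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys summary above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma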
00:16:36.648 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.649 19:07:28 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.649 19:07:28 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:16:43.216 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:16:43.216 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 
]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:16:43.216 Found net devices under 0000:af:00.0: mlx_0_0 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:16:43.216 Found net devices under 0000:af:00.1: mlx_0_1 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- 
# modprobe rdma_ucm 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:43.216 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:43.217 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:43.217 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 
00:16:43.217 altname enp175s0f0np0 00:16:43.217 altname ens801f0np0 00:16:43.217 inet 192.168.100.8/24 scope global mlx_0_0 00:16:43.217 valid_lft forever preferred_lft forever 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:43.217 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:43.217 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:16:43.217 altname enp175s0f1np1 00:16:43.217 altname ens801f1np1 00:16:43.217 inet 192.168.100.9/24 scope global mlx_0_1 00:16:43.217 valid_lft forever preferred_lft forever 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:43.217 19:07:34 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:43.217 192.168.100.9' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:43.217 192.168.100.9' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:43.217 192.168.100.9' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:43.217 19:07:34 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=765674 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 765674 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 765674 ']' 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.217 19:07:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 [2024-07-25 19:07:34.825371] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:43.217 [2024-07-25 19:07:34.825415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.217 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.217 [2024-07-25 19:07:34.893439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.217 [2024-07-25 19:07:34.971874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.217 [2024-07-25 19:07:34.971916] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.217 [2024-07-25 19:07:34.971923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.217 [2024-07-25 19:07:34.971929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.217 [2024-07-25 19:07:34.971934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
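Note: the interface-to-IP resolution a few lines back (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9) is a single pipeline in nvmf/common.sh; as a sketch:

    get_ip_address() {
        local interface=$1
        # Field 4 of `ip -o -4 addr show` is addr/prefix; cut drops the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # 192.168.100.9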
00:16:43.217 [2024-07-25 19:07:34.971993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.217 [2024-07-25 19:07:34.972100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.217 [2024-07-25 19:07:34.972206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.217 [2024-07-25 19:07:34.972207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.217 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.217 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:43.217 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.217 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.217 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 [2024-07-25 19:07:35.732284] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x184fdf0/0x18542e0) succeed. 00:16:43.476 [2024-07-25 19:07:35.741641] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1851430/0x1895980) succeed. 
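Note: stripped of the xtrace prefixes, the target-side provisioning performed across this stretch of log is just the following RPC sequence (arguments copied from this run; rpc_cmd resolves to rpc.py against the default /var/tmp/spdk.sock):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420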
00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 Malloc0 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 Malloc1 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.476 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.477 [2024-07-25 19:07:35.939046] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:43.477 19:07:35 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.477 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:43.735 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.735 19:07:35 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:16:43.735 00:16:43.735 Discovery Log Number of Records 2, Generation counter 2 00:16:43.735 =====Discovery Log Entry 0====== 00:16:43.735 trtype: rdma 00:16:43.735 adrfam: ipv4 00:16:43.735 subtype: current discovery subsystem 00:16:43.735 treq: not required 00:16:43.735 portid: 0 00:16:43.735 trsvcid: 4420 00:16:43.735 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:43.735 traddr: 192.168.100.8 00:16:43.735 eflags: explicit discovery connections, duplicate discovery information 00:16:43.735 rdma_prtype: not specified 00:16:43.735 rdma_qptype: connected 00:16:43.735 rdma_cms: rdma-cm 00:16:43.735 rdma_pkey: 0x0000 00:16:43.735 =====Discovery Log Entry 1====== 00:16:43.735 trtype: rdma 00:16:43.735 adrfam: ipv4 00:16:43.735 subtype: nvme subsystem 00:16:43.735 treq: not required 00:16:43.735 portid: 0 00:16:43.735 trsvcid: 4420 00:16:43.735 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:43.735 traddr: 192.168.100.8 00:16:43.735 eflags: none 00:16:43.735 rdma_prtype: not specified 00:16:43.735 rdma_qptype: connected 00:16:43.735 rdma_cms: rdma-cm 00:16:43.735 rdma_pkey: 0x0000 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:43.735 19:07:36 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:47.021 19:07:39 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:48.927 /dev/nvme0n2 ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:48.927 19:07:41 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:51.462 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.463 
19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:51.463 rmmod nvme_rdma 00:16:51.463 rmmod nvme_fabrics 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 765674 ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 765674 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 765674 ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 765674 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 765674 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 765674' 00:16:51.463 killing process with pid 765674 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 765674 00:16:51.463 19:07:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 765674 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:51.798 00:16:51.798 real 0m15.254s 00:16:51.798 user 0m37.871s 00:16:51.798 sys 0m5.117s 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:51.798 ************************************ 00:16:51.798 END TEST nvmf_nvme_cli 00:16:51.798 ************************************ 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.798 ************************************ 00:16:51.798 START TEST nvmf_auth_target 00:16:51.798 ************************************ 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
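Note: taken together, the host side of the nvme_cli test that just finished reduces to four steps. A condensed sketch with the values from this run; the polling loop is a simplification of waitforserial:

    NVME_HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562)
    # 1. Query the discovery service (two records expected: discovery + cnode1).
    nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420
    # 2. Connect to the I/O subsystem; -i 15 is the retry count common.sh picks for mlx5 NICs.
    nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # 3. Wait until both namespaces surface with the subsystem serial (waitforserial, simplified).
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )); do sleep 2; done
    # 4. Tear the association down again.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1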
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:51.798 * Looking for test storage... 00:16:51.798 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.798 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
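[Annotation] The ballooning PATH values above come from paths/export.sh being sourced once per nested script: each pass prepends the same toolchain directories again, so the variable accumulates duplicates across the run. A minimal sketch of the effect (directory list and prepend order taken from the trace; the dedup one-liner is hypothetical and is not part of the SPDK scripts):

    # Each source of paths/export.sh prepends the toolchain bins again,
    # which is why the PATH in the trace keeps growing with duplicates.
    for d in /opt/golangci/1.54.2/bin /opt/go/1.21.1/bin /opt/protoc/21.7/bin; do
      PATH="$d:$PATH"
    done
    export PATH

    # Hypothetical cleanup (not in the SPDK scripts): keep only the first
    # occurrence of each entry, preserving search order.
    PATH=$(echo "$PATH" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -)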
00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.799 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:52.076 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.077 19:07:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:16:57.581 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:57.581 19:07:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:16:57.581 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:16:57.581 Found net devices under 0000:af:00.0: mlx_0_0 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:16:57.581 Found net devices under 0000:af:00.1: mlx_0_1 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:57.581 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:57.582 19:07:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:57.582 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.582 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:16:57.582 altname enp175s0f0np0 00:16:57.582 altname ens801f0np0 00:16:57.582 inet 192.168.100.8/24 scope global mlx_0_0 00:16:57.582 valid_lft forever preferred_lft forever 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:57.582 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.582 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:16:57.582 altname enp175s0f1np1 00:16:57.582 altname ens801f1np1 00:16:57.582 inet 192.168.100.9/24 scope global mlx_0_1 00:16:57.582 valid_lft forever preferred_lft forever 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:57.582 19:07:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:57.582 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:57.842 192.168.100.9' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:57.842 192.168.100.9' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:57.842 192.168.100.9' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=770465 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 770465 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 770465 ']' 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
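[Annotation] For orientation, the bring-up traced above reduces to: load nvme-rdma, start nvmf_tgt with full logging plus the nvmf_auth component, and poll the RPC socket before driving it. A minimal stand-in sketch, with flags and paths taken from the trace (the polling loop only approximates the waitforlisten helper; it is not a reproduction of it):

    # Flags as traced: -i 0 (shared-memory id), -e 0xFFFF (all trace flags),
    # -L nvmf_auth (enable the nvmf_auth debug log component).
    modprobe nvme-rdma
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!

    # Poll the default RPC socket until the target answers; this stands in
    # for the waitforlisten helper used by the test harness.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done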
00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.842 19:07:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=770710 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9786057296328c422659bdc8594f7b010fac5fb4a0c4a899 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.H8T 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9786057296328c422659bdc8594f7b010fac5fb4a0c4a899 0 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9786057296328c422659bdc8594f7b010fac5fb4a0c4a899 0 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:58.779 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9786057296328c422659bdc8594f7b010fac5fb4a0c4a899 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 
-- # python - 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.H8T 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.H8T 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.H8T 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5af08fb189062a4df1de679c428f4f8a659c07f42eb8fc30707917a8b42e65c3 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.c8R 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5af08fb189062a4df1de679c428f4f8a659c07f42eb8fc30707917a8b42e65c3 3 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5af08fb189062a4df1de679c428f4f8a659c07f42eb8fc30707917a8b42e65c3 3 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5af08fb189062a4df1de679c428f4f8a659c07f42eb8fc30707917a8b42e65c3 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.c8R 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.c8R 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.c8R 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=sha256 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53404f40a5ce6c822c5db8663859c41a 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ItA 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53404f40a5ce6c822c5db8663859c41a 1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53404f40a5ce6c822c5db8663859c41a 1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53404f40a5ce6c822c5db8663859c41a 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ItA 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ItA 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ItA 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=240d13293a883f7c45cacff76147e08b48f03cfabc7f5e62 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.W5G 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 240d13293a883f7c45cacff76147e08b48f03cfabc7f5e62 2 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 240d13293a883f7c45cacff76147e08b48f03cfabc7f5e62 2 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix 
key digest 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=240d13293a883f7c45cacff76147e08b48f03cfabc7f5e62 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:58.780 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.W5G 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.W5G 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.W5G 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6a14edd633e625422a4edcf440e555eca784cf893339483a 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gwM 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6a14edd633e625422a4edcf440e555eca784cf893339483a 2 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6a14edd633e625422a4edcf440e555eca784cf893339483a 2 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6a14edd633e625422a4edcf440e555eca784cf893339483a 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gwM 00:16:59.039 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gwM 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.gwM 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:59.040 19:07:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=72d44dff9b4eb402fd04c691cf482517 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ly9 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 72d44dff9b4eb402fd04c691cf482517 1 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 72d44dff9b4eb402fd04c691cf482517 1 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=72d44dff9b4eb402fd04c691cf482517 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ly9 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ly9 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ly9 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a29ee048fe9007a40033077810b3a661af4cd1155298aa2c936ffa461e675d94 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # 
file=/tmp/spdk.key-sha512.tJ7 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a29ee048fe9007a40033077810b3a661af4cd1155298aa2c936ffa461e675d94 3 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a29ee048fe9007a40033077810b3a661af4cd1155298aa2c936ffa461e675d94 3 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a29ee048fe9007a40033077810b3a661af4cd1155298aa2c936ffa461e675d94 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tJ7 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tJ7 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.tJ7 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 770465 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 770465 ']' 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.040 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 770710 /var/tmp/host.sock 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 770710 ']' 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:59.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
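[Annotation] All of the keys generated above share the DHHC-1 secret layout that reappears later on the nvme connect command line: a two-digit hash identifier followed by a base64 blob. As far as can be inferred from this trace (the xxd hex text, the `python -` step, and the printed secret), the blob is the ASCII hex string itself with a little-endian CRC32 appended before base64 encoding. A sketch of that formatting step under those assumptions, driving python from the shell just as the traced helper does:

    key=9786057296328c422659bdc8594f7b010fac5fb4a0c4a899   # hex text from 'xxd -p -c0 -l 24 /dev/urandom' above
    digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'EOF'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    # Assumption: the ASCII hex string is the key material, and a
    # little-endian CRC32 of it is appended before base64 encoding.
    blob = key + struct.pack("<I", zlib.crc32(key) & 0xffffffff)
    print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(blob).decode()))
    EOF

With digest 0 this prints a secret of the form DHHC-1:00:...==:, matching the shape of the --dhchap-secret argument seen further down in the log.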
00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.299 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H8T 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.H8T 00:16:59.558 19:07:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.H8T 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.c8R ]] 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c8R 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c8R 00:16:59.816 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c8R 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ItA 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.074 19:07:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ItA 00:17:00.074 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ItA 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.W5G ]] 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W5G 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W5G 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.W5G 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gwM 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gwM 00:17:00.333 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gwM 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ly9 ]] 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ly9 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ly9 00:17:00.592 19:07:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ly9 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:00.851 19:07:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tJ7 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tJ7 00:17:00.851 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tJ7 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.123 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.386 00:17:01.386 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.386 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.386 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.644 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.644 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.644 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.644 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.644 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.645 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.645 { 00:17:01.645 "cntlid": 1, 00:17:01.645 "qid": 0, 00:17:01.645 "state": "enabled", 00:17:01.645 "thread": "nvmf_tgt_poll_group_000", 00:17:01.645 "listen_address": { 00:17:01.645 "trtype": "RDMA", 00:17:01.645 "adrfam": "IPv4", 00:17:01.645 "traddr": "192.168.100.8", 00:17:01.645 "trsvcid": "4420" 00:17:01.645 }, 00:17:01.645 "peer_address": { 00:17:01.645 "trtype": "RDMA", 00:17:01.645 "adrfam": "IPv4", 00:17:01.645 "traddr": "192.168.100.8", 00:17:01.645 "trsvcid": "43201" 00:17:01.645 }, 00:17:01.645 "auth": { 00:17:01.645 "state": "completed", 00:17:01.645 "digest": "sha256", 00:17:01.645 "dhgroup": "null" 00:17:01.645 } 00:17:01.645 } 00:17:01.645 ]' 00:17:01.645 19:07:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.645 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.645 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.645 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:01.645 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.903 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.903 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.903 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.903 19:07:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: 
--dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:02.839 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.098 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.099 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.357 00:17:03.357 19:07:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.357 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.357 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.616 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.616 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.616 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.616 19:07:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.616 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.616 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.616 { 00:17:03.616 "cntlid": 3, 00:17:03.616 "qid": 0, 00:17:03.616 "state": "enabled", 00:17:03.616 "thread": "nvmf_tgt_poll_group_000", 00:17:03.616 "listen_address": { 00:17:03.616 "trtype": "RDMA", 00:17:03.616 "adrfam": "IPv4", 00:17:03.616 "traddr": "192.168.100.8", 00:17:03.616 "trsvcid": "4420" 00:17:03.616 }, 00:17:03.616 "peer_address": { 00:17:03.616 "trtype": "RDMA", 00:17:03.616 "adrfam": "IPv4", 00:17:03.616 "traddr": "192.168.100.8", 00:17:03.616 "trsvcid": "37825" 00:17:03.616 }, 00:17:03.616 "auth": { 00:17:03.616 "state": "completed", 00:17:03.616 "digest": "sha256", 00:17:03.616 "dhgroup": "null" 00:17:03.616 } 00:17:03.616 } 00:17:03.616 ]' 00:17:03.616 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.616 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.616 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.875 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:03.875 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.875 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.875 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.875 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.133 19:07:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:04.702 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:04.960 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.220 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.478 00:17:05.478 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.478 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.478 19:07:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.737 { 00:17:05.737 "cntlid": 5, 00:17:05.737 "qid": 0, 00:17:05.737 "state": "enabled", 00:17:05.737 "thread": "nvmf_tgt_poll_group_000", 00:17:05.737 "listen_address": { 00:17:05.737 "trtype": "RDMA", 00:17:05.737 "adrfam": "IPv4", 00:17:05.737 "traddr": "192.168.100.8", 00:17:05.737 "trsvcid": "4420" 00:17:05.737 }, 00:17:05.737 "peer_address": { 00:17:05.737 "trtype": "RDMA", 00:17:05.737 "adrfam": "IPv4", 00:17:05.737 "traddr": "192.168.100.8", 00:17:05.737 "trsvcid": "60220" 00:17:05.737 }, 00:17:05.737 "auth": { 00:17:05.737 "state": "completed", 00:17:05.737 "digest": "sha256", 00:17:05.737 "dhgroup": "null" 00:17:05.737 } 00:17:05.737 } 00:17:05.737 ]' 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.737 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.996 19:07:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.932 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.191 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.448 00:17:07.448 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.448 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.448 19:07:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.707 19:08:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.707 { 00:17:07.707 "cntlid": 7, 00:17:07.707 "qid": 0, 00:17:07.707 "state": "enabled", 00:17:07.707 "thread": "nvmf_tgt_poll_group_000", 00:17:07.707 "listen_address": { 00:17:07.707 "trtype": "RDMA", 00:17:07.707 "adrfam": "IPv4", 00:17:07.707 "traddr": "192.168.100.8", 00:17:07.707 "trsvcid": "4420" 00:17:07.707 }, 00:17:07.707 "peer_address": { 00:17:07.707 "trtype": "RDMA", 00:17:07.707 "adrfam": "IPv4", 00:17:07.707 "traddr": "192.168.100.8", 00:17:07.707 "trsvcid": "47589" 00:17:07.707 }, 00:17:07.707 "auth": { 00:17:07.707 "state": "completed", 00:17:07.707 "digest": "sha256", 00:17:07.707 "dhgroup": "null" 00:17:07.707 } 00:17:07.707 } 00:17:07.707 ]' 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.707 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.966 19:08:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:08.903 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.903 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:08.903 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.903 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.162 19:08:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.162 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.421 00:17:09.421 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.421 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.421 19:08:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.680 
19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.680 { 00:17:09.680 "cntlid": 9, 00:17:09.680 "qid": 0, 00:17:09.680 "state": "enabled", 00:17:09.680 "thread": "nvmf_tgt_poll_group_000", 00:17:09.680 "listen_address": { 00:17:09.680 "trtype": "RDMA", 00:17:09.680 "adrfam": "IPv4", 00:17:09.680 "traddr": "192.168.100.8", 00:17:09.680 "trsvcid": "4420" 00:17:09.680 }, 00:17:09.680 "peer_address": { 00:17:09.680 "trtype": "RDMA", 00:17:09.680 "adrfam": "IPv4", 00:17:09.680 "traddr": "192.168.100.8", 00:17:09.680 "trsvcid": "55800" 00:17:09.680 }, 00:17:09.680 "auth": { 00:17:09.680 "state": "completed", 00:17:09.680 "digest": "sha256", 00:17:09.680 "dhgroup": "ffdhe2048" 00:17:09.680 } 00:17:09.680 } 00:17:09.680 ]' 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.680 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.939 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.939 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.939 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.939 19:08:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:10.876 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.134 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.393 00:17:11.393 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.393 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.393 19:08:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.652 { 00:17:11.652 "cntlid": 11, 00:17:11.652 
"qid": 0, 00:17:11.652 "state": "enabled", 00:17:11.652 "thread": "nvmf_tgt_poll_group_000", 00:17:11.652 "listen_address": { 00:17:11.652 "trtype": "RDMA", 00:17:11.652 "adrfam": "IPv4", 00:17:11.652 "traddr": "192.168.100.8", 00:17:11.652 "trsvcid": "4420" 00:17:11.652 }, 00:17:11.652 "peer_address": { 00:17:11.652 "trtype": "RDMA", 00:17:11.652 "adrfam": "IPv4", 00:17:11.652 "traddr": "192.168.100.8", 00:17:11.652 "trsvcid": "34266" 00:17:11.652 }, 00:17:11.652 "auth": { 00:17:11.652 "state": "completed", 00:17:11.652 "digest": "sha256", 00:17:11.652 "dhgroup": "ffdhe2048" 00:17:11.652 } 00:17:11.652 } 00:17:11.652 ]' 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.652 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.911 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.911 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.911 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.911 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.911 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.170 19:08:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:12.736 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.994 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.252 19:08:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.252 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.510 00:17:13.510 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.510 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.510 19:08:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.768 { 00:17:13.768 "cntlid": 13, 00:17:13.768 "qid": 0, 00:17:13.768 "state": "enabled", 00:17:13.768 "thread": "nvmf_tgt_poll_group_000", 00:17:13.768 "listen_address": { 00:17:13.768 "trtype": "RDMA", 00:17:13.768 "adrfam": "IPv4", 00:17:13.768 "traddr": "192.168.100.8", 00:17:13.768 "trsvcid": "4420" 00:17:13.768 }, 
00:17:13.768 "peer_address": { 00:17:13.768 "trtype": "RDMA", 00:17:13.768 "adrfam": "IPv4", 00:17:13.768 "traddr": "192.168.100.8", 00:17:13.768 "trsvcid": "36870" 00:17:13.768 }, 00:17:13.768 "auth": { 00:17:13.768 "state": "completed", 00:17:13.768 "digest": "sha256", 00:17:13.768 "dhgroup": "ffdhe2048" 00:17:13.768 } 00:17:13.768 } 00:17:13.768 ]' 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.768 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.027 19:08:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:14.961 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.961 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:14.961 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.962 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.962 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.962 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.962 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.962 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.219 19:08:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.219 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.220 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.220 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.477 00:17:15.477 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.477 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.477 19:08:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.735 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.735 { 00:17:15.735 "cntlid": 15, 00:17:15.735 "qid": 0, 00:17:15.735 "state": "enabled", 00:17:15.735 "thread": "nvmf_tgt_poll_group_000", 00:17:15.735 "listen_address": { 00:17:15.735 "trtype": "RDMA", 00:17:15.735 "adrfam": "IPv4", 00:17:15.735 "traddr": "192.168.100.8", 00:17:15.735 "trsvcid": "4420" 00:17:15.735 }, 00:17:15.735 "peer_address": { 00:17:15.735 "trtype": "RDMA", 00:17:15.735 "adrfam": "IPv4", 00:17:15.735 "traddr": "192.168.100.8", 00:17:15.735 "trsvcid": "42531" 00:17:15.735 }, 00:17:15.736 "auth": { 00:17:15.736 "state": "completed", 00:17:15.736 "digest": "sha256", 00:17:15.736 "dhgroup": "ffdhe2048" 00:17:15.736 } 00:17:15.736 } 
00:17:15.736 ]' 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.736 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.993 19:08:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:16.926 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.926 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:16.926 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.926 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.184 
19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.184 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.442 00:17:17.442 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.442 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.442 19:08:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.699 { 00:17:17.699 "cntlid": 17, 00:17:17.699 "qid": 0, 00:17:17.699 "state": "enabled", 00:17:17.699 "thread": "nvmf_tgt_poll_group_000", 00:17:17.699 "listen_address": { 00:17:17.699 "trtype": "RDMA", 00:17:17.699 "adrfam": "IPv4", 00:17:17.699 "traddr": "192.168.100.8", 00:17:17.699 "trsvcid": "4420" 00:17:17.699 }, 00:17:17.699 "peer_address": { 00:17:17.699 "trtype": "RDMA", 00:17:17.699 "adrfam": "IPv4", 00:17:17.699 "traddr": "192.168.100.8", 00:17:17.699 "trsvcid": "49460" 00:17:17.699 }, 00:17:17.699 "auth": { 00:17:17.699 "state": "completed", 00:17:17.699 "digest": "sha256", 00:17:17.699 "dhgroup": "ffdhe3072" 00:17:17.699 } 00:17:17.699 } 00:17:17.699 ]' 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.699 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.957 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.957 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.957 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.957 19:08:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:18.891 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.149 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.407 00:17:19.407 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.407 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.407 19:08:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.665 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.665 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.665 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.666 { 00:17:19.666 "cntlid": 19, 00:17:19.666 "qid": 0, 00:17:19.666 "state": "enabled", 00:17:19.666 "thread": "nvmf_tgt_poll_group_000", 00:17:19.666 "listen_address": { 00:17:19.666 "trtype": "RDMA", 00:17:19.666 "adrfam": "IPv4", 00:17:19.666 "traddr": "192.168.100.8", 00:17:19.666 "trsvcid": "4420" 00:17:19.666 }, 00:17:19.666 "peer_address": { 00:17:19.666 "trtype": "RDMA", 00:17:19.666 "adrfam": "IPv4", 00:17:19.666 "traddr": "192.168.100.8", 00:17:19.666 "trsvcid": "34690" 00:17:19.666 }, 00:17:19.666 "auth": { 00:17:19.666 "state": "completed", 00:17:19.666 "digest": "sha256", 00:17:19.666 "dhgroup": "ffdhe3072" 00:17:19.666 } 00:17:19.666 } 00:17:19.666 ]' 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.666 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.924 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:17:19.924 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.924 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.924 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.924 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.182 19:08:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:20.749 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.008 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.266 19:08:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.266 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.267 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.524 00:17:21.524 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.524 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.524 19:08:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.782 { 00:17:21.782 "cntlid": 21, 00:17:21.782 "qid": 0, 00:17:21.782 "state": "enabled", 00:17:21.782 "thread": "nvmf_tgt_poll_group_000", 00:17:21.782 "listen_address": { 00:17:21.782 "trtype": "RDMA", 00:17:21.782 "adrfam": "IPv4", 00:17:21.782 "traddr": "192.168.100.8", 00:17:21.782 "trsvcid": "4420" 00:17:21.782 }, 00:17:21.782 "peer_address": { 00:17:21.782 "trtype": "RDMA", 00:17:21.782 "adrfam": "IPv4", 00:17:21.782 "traddr": "192.168.100.8", 00:17:21.782 "trsvcid": "47429" 00:17:21.782 }, 00:17:21.782 "auth": { 00:17:21.782 "state": "completed", 00:17:21.782 "digest": "sha256", 00:17:21.782 "dhgroup": "ffdhe3072" 00:17:21.782 } 00:17:21.782 } 00:17:21.782 ]' 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.782 19:08:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.782 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.040 19:08:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.977 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.236 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.495 00:17:23.495 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.495 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.495 19:08:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.753 { 00:17:23.753 "cntlid": 23, 00:17:23.753 "qid": 0, 00:17:23.753 "state": "enabled", 00:17:23.753 "thread": "nvmf_tgt_poll_group_000", 00:17:23.753 "listen_address": { 00:17:23.753 "trtype": "RDMA", 00:17:23.753 "adrfam": "IPv4", 00:17:23.753 "traddr": "192.168.100.8", 00:17:23.753 "trsvcid": "4420" 00:17:23.753 }, 00:17:23.753 "peer_address": { 00:17:23.753 "trtype": "RDMA", 00:17:23.753 "adrfam": "IPv4", 00:17:23.753 "traddr": "192.168.100.8", 00:17:23.753 "trsvcid": "38217" 00:17:23.753 }, 00:17:23.753 "auth": { 00:17:23.753 "state": "completed", 00:17:23.753 "digest": "sha256", 00:17:23.753 "dhgroup": "ffdhe3072" 00:17:23.753 } 00:17:23.753 } 00:17:23.753 ]' 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.753 19:08:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.012 19:08:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:24.948 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.207 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.207 19:08:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.465 00:17:25.724 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.724 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.724 19:08:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.724 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.724 { 00:17:25.724 "cntlid": 25, 00:17:25.724 "qid": 0, 00:17:25.724 "state": "enabled", 00:17:25.724 "thread": "nvmf_tgt_poll_group_000", 00:17:25.724 "listen_address": { 00:17:25.724 "trtype": "RDMA", 00:17:25.724 "adrfam": "IPv4", 00:17:25.724 "traddr": "192.168.100.8", 00:17:25.724 "trsvcid": "4420" 00:17:25.724 }, 00:17:25.724 "peer_address": { 00:17:25.724 "trtype": "RDMA", 00:17:25.724 "adrfam": "IPv4", 00:17:25.724 "traddr": "192.168.100.8", 00:17:25.724 "trsvcid": "42978" 00:17:25.724 }, 00:17:25.724 "auth": { 00:17:25.724 "state": "completed", 00:17:25.724 "digest": "sha256", 00:17:25.724 "dhgroup": "ffdhe4096" 00:17:25.724 } 00:17:25.724 } 00:17:25.724 ]' 00:17:25.725 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.725 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.725 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.983 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.983 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.983 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.983 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.983 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.242 19:08:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:26.809 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.068 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.327 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.586 00:17:27.586 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.586 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.586 19:08:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.845 { 00:17:27.845 "cntlid": 27, 00:17:27.845 "qid": 0, 00:17:27.845 "state": "enabled", 00:17:27.845 "thread": "nvmf_tgt_poll_group_000", 00:17:27.845 "listen_address": { 00:17:27.845 "trtype": "RDMA", 00:17:27.845 "adrfam": "IPv4", 00:17:27.845 "traddr": "192.168.100.8", 00:17:27.845 "trsvcid": "4420" 00:17:27.845 }, 00:17:27.845 "peer_address": { 00:17:27.845 "trtype": "RDMA", 00:17:27.845 "adrfam": "IPv4", 00:17:27.845 "traddr": "192.168.100.8", 00:17:27.845 "trsvcid": "43619" 00:17:27.845 }, 00:17:27.845 "auth": { 00:17:27.845 "state": "completed", 00:17:27.845 "digest": "sha256", 00:17:27.845 "dhgroup": "ffdhe4096" 00:17:27.845 } 00:17:27.845 } 00:17:27.845 ]' 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.845 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.104 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.104 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.104 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.104 19:08:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret 
DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:29.041 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.299 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.300 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.300 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.300 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.300 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.300 19:08:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.558 00:17:29.558 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.558 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.558 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.817 { 00:17:29.817 "cntlid": 29, 00:17:29.817 "qid": 0, 00:17:29.817 "state": "enabled", 00:17:29.817 "thread": "nvmf_tgt_poll_group_000", 00:17:29.817 "listen_address": { 00:17:29.817 "trtype": "RDMA", 00:17:29.817 "adrfam": "IPv4", 00:17:29.817 "traddr": "192.168.100.8", 00:17:29.817 "trsvcid": "4420" 00:17:29.817 }, 00:17:29.817 "peer_address": { 00:17:29.817 "trtype": "RDMA", 00:17:29.817 "adrfam": "IPv4", 00:17:29.817 "traddr": "192.168.100.8", 00:17:29.817 "trsvcid": "59582" 00:17:29.817 }, 00:17:29.817 "auth": { 00:17:29.817 "state": "completed", 00:17:29.817 "digest": "sha256", 00:17:29.817 "dhgroup": "ffdhe4096" 00:17:29.817 } 00:17:29.817 } 00:17:29.817 ]' 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.817 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.076 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.076 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.076 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.076 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.076 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.335 19:08:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:30.902 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.159 
19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.159 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.418 19:08:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.677 00:17:31.677 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.677 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.677 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
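
[Editor's note] The trace above repeats one connect_authenticate cycle per digest/dhgroup/key combination. Below is a minimal shell sketch of a single iteration, reconstructed from the commands visible in this log; it is not part of target/auth.sh itself. Assumptions are flagged in the comments: the target is reached via rpc.py's default socket (the log only shows the host-side socket /var/tmp/host.sock), and KEY/CKEY stand in for the DHHC-1 secrets the log passes to nvme-cli.

    # One iteration of the connect_authenticate cycle traced above (sketch).
    # Assumption: plain "$RPC" reaches the target's default RPC socket.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOSTRPC="$RPC -s /var/tmp/host.sock"   # host-side bdev_nvme instance, as in the log
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562

    # Pin the initiator to a single digest/dhgroup pair for this pass.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Register the host on the target; --dhchap-ctrlr-key enables bidirectional auth.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host with the matching key pair, then verify what was negotiated.
    $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect "sha256"
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe4096"
    $HOSTRPC bdev_nvme_detach_controller nvme0

    # Repeat the handshake with nvme-cli, passing the DHHC-1 secrets directly,
    # then tear down so the next digest/dhgroup/key combination starts clean.
    # KEY/CKEY are illustrative stand-ins for the DHHC-1:NN:... strings above.
    nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

When a controller key is absent (the key3 passes above), the --dhchap-ctrlr-key arguments are simply dropped, which is exactly what the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in target/auth.sh does.
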
00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.935 { 00:17:31.935 "cntlid": 31, 00:17:31.935 "qid": 0, 00:17:31.935 "state": "enabled", 00:17:31.935 "thread": "nvmf_tgt_poll_group_000", 00:17:31.935 "listen_address": { 00:17:31.935 "trtype": "RDMA", 00:17:31.935 "adrfam": "IPv4", 00:17:31.935 "traddr": "192.168.100.8", 00:17:31.935 "trsvcid": "4420" 00:17:31.935 }, 00:17:31.935 "peer_address": { 00:17:31.935 "trtype": "RDMA", 00:17:31.935 "adrfam": "IPv4", 00:17:31.935 "traddr": "192.168.100.8", 00:17:31.935 "trsvcid": "52238" 00:17:31.935 }, 00:17:31.935 "auth": { 00:17:31.935 "state": "completed", 00:17:31.935 "digest": "sha256", 00:17:31.935 "dhgroup": "ffdhe4096" 00:17:31.935 } 00:17:31.935 } 00:17:31.935 ]' 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.935 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.194 19:08:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:33.129 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.388 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.648 19:08:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.906 00:17:33.906 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.906 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.906 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.165 { 00:17:34.165 "cntlid": 33, 00:17:34.165 "qid": 0, 00:17:34.165 "state": "enabled", 00:17:34.165 "thread": "nvmf_tgt_poll_group_000", 00:17:34.165 "listen_address": { 00:17:34.165 "trtype": "RDMA", 00:17:34.165 "adrfam": "IPv4", 00:17:34.165 "traddr": "192.168.100.8", 00:17:34.165 "trsvcid": "4420" 00:17:34.165 }, 00:17:34.165 "peer_address": { 00:17:34.165 "trtype": "RDMA", 00:17:34.165 "adrfam": "IPv4", 00:17:34.165 "traddr": "192.168.100.8", 00:17:34.165 "trsvcid": "48344" 00:17:34.165 }, 00:17:34.165 "auth": { 00:17:34.165 "state": "completed", 00:17:34.165 "digest": "sha256", 00:17:34.165 "dhgroup": "ffdhe6144" 00:17:34.165 } 00:17:34.165 } 00:17:34.165 ]' 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.165 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.424 19:08:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.361 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.620 19:08:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.879 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.138 { 00:17:36.138 "cntlid": 35, 00:17:36.138 "qid": 0, 00:17:36.138 "state": "enabled", 00:17:36.138 "thread": "nvmf_tgt_poll_group_000", 00:17:36.138 "listen_address": { 00:17:36.138 "trtype": "RDMA", 00:17:36.138 "adrfam": "IPv4", 00:17:36.138 "traddr": "192.168.100.8", 00:17:36.138 "trsvcid": "4420" 00:17:36.138 }, 00:17:36.138 "peer_address": { 00:17:36.138 "trtype": "RDMA", 00:17:36.138 "adrfam": "IPv4", 00:17:36.138 "traddr": "192.168.100.8", 00:17:36.138 "trsvcid": "46559" 00:17:36.138 }, 00:17:36.138 "auth": { 00:17:36.138 "state": "completed", 00:17:36.138 "digest": "sha256", 00:17:36.138 "dhgroup": "ffdhe6144" 00:17:36.138 } 00:17:36.138 } 00:17:36.138 ]' 00:17:36.138 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.398 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.656 19:08:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:37.223 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:37.482 19:08:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.741 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.309 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.310 { 00:17:38.310 "cntlid": 
37, 00:17:38.310 "qid": 0, 00:17:38.310 "state": "enabled", 00:17:38.310 "thread": "nvmf_tgt_poll_group_000", 00:17:38.310 "listen_address": { 00:17:38.310 "trtype": "RDMA", 00:17:38.310 "adrfam": "IPv4", 00:17:38.310 "traddr": "192.168.100.8", 00:17:38.310 "trsvcid": "4420" 00:17:38.310 }, 00:17:38.310 "peer_address": { 00:17:38.310 "trtype": "RDMA", 00:17:38.310 "adrfam": "IPv4", 00:17:38.310 "traddr": "192.168.100.8", 00:17:38.310 "trsvcid": "51421" 00:17:38.310 }, 00:17:38.310 "auth": { 00:17:38.310 "state": "completed", 00:17:38.310 "digest": "sha256", 00:17:38.310 "dhgroup": "ffdhe6144" 00:17:38.310 } 00:17:38.310 } 00:17:38.310 ]' 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.310 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.568 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.568 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.568 19:08:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.568 19:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:39.504 19:08:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.762 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.020 19:08:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.020 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.021 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.021 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.021 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.021 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.279 00:17:40.279 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.279 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.279 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.538 { 00:17:40.538 "cntlid": 39, 00:17:40.538 "qid": 0, 00:17:40.538 "state": "enabled", 00:17:40.538 "thread": "nvmf_tgt_poll_group_000", 00:17:40.538 "listen_address": { 00:17:40.538 "trtype": "RDMA", 00:17:40.538 "adrfam": "IPv4", 00:17:40.538 "traddr": "192.168.100.8", 00:17:40.538 "trsvcid": "4420" 00:17:40.538 }, 00:17:40.538 "peer_address": { 00:17:40.538 "trtype": "RDMA", 00:17:40.538 
"adrfam": "IPv4", 00:17:40.538 "traddr": "192.168.100.8", 00:17:40.538 "trsvcid": "38410" 00:17:40.538 }, 00:17:40.538 "auth": { 00:17:40.538 "state": "completed", 00:17:40.538 "digest": "sha256", 00:17:40.538 "dhgroup": "ffdhe6144" 00:17:40.538 } 00:17:40.538 } 00:17:40.538 ]' 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.538 19:08:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.796 19:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:41.732 19:08:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:41.732 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.991 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.559 00:17:42.559 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.559 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.559 19:08:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.818 { 00:17:42.818 "cntlid": 41, 00:17:42.818 "qid": 0, 00:17:42.818 "state": "enabled", 00:17:42.818 "thread": "nvmf_tgt_poll_group_000", 00:17:42.818 "listen_address": { 00:17:42.818 "trtype": "RDMA", 00:17:42.818 "adrfam": "IPv4", 00:17:42.818 "traddr": "192.168.100.8", 00:17:42.818 "trsvcid": "4420" 00:17:42.818 }, 00:17:42.818 "peer_address": { 00:17:42.818 "trtype": "RDMA", 00:17:42.818 "adrfam": "IPv4", 00:17:42.818 "traddr": "192.168.100.8", 00:17:42.818 "trsvcid": "38540" 00:17:42.818 }, 00:17:42.818 "auth": { 00:17:42.818 "state": "completed", 00:17:42.818 "digest": "sha256", 00:17:42.818 "dhgroup": "ffdhe8192" 
00:17:42.818 } 00:17:42.818 } 00:17:42.818 ]' 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.818 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.077 19:08:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.013 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.014 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 
00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.272 19:08:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.840 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.840 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.099 { 00:17:45.099 "cntlid": 43, 00:17:45.099 "qid": 0, 00:17:45.099 "state": "enabled", 00:17:45.099 "thread": "nvmf_tgt_poll_group_000", 00:17:45.099 "listen_address": { 00:17:45.099 "trtype": "RDMA", 00:17:45.099 "adrfam": "IPv4", 00:17:45.099 "traddr": "192.168.100.8", 00:17:45.099 "trsvcid": "4420" 00:17:45.099 }, 00:17:45.099 "peer_address": { 00:17:45.099 "trtype": "RDMA", 00:17:45.099 "adrfam": "IPv4", 00:17:45.099 "traddr": "192.168.100.8", 00:17:45.099 "trsvcid": "44402" 00:17:45.099 }, 00:17:45.099 "auth": { 00:17:45.099 "state": "completed", 00:17:45.099 "digest": "sha256", 00:17:45.099 "dhgroup": "ffdhe8192" 00:17:45.099 } 00:17:45.099 } 00:17:45.099 ]' 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.099 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.358 19:08:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.294 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.552 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.553 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.553 19:08:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.120 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.120 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.120 { 00:17:47.120 "cntlid": 45, 00:17:47.120 "qid": 0, 00:17:47.120 "state": "enabled", 00:17:47.120 "thread": "nvmf_tgt_poll_group_000", 00:17:47.120 "listen_address": { 00:17:47.120 "trtype": "RDMA", 00:17:47.120 "adrfam": "IPv4", 00:17:47.120 "traddr": "192.168.100.8", 00:17:47.120 "trsvcid": "4420" 00:17:47.120 }, 00:17:47.120 "peer_address": { 00:17:47.120 "trtype": "RDMA", 00:17:47.120 "adrfam": "IPv4", 00:17:47.120 "traddr": "192.168.100.8", 00:17:47.120 "trsvcid": "54476" 00:17:47.120 }, 00:17:47.120 "auth": { 00:17:47.120 "state": "completed", 00:17:47.120 "digest": "sha256", 00:17:47.120 "dhgroup": "ffdhe8192" 00:17:47.120 } 00:17:47.121 } 00:17:47.121 ]' 00:17:47.121 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.380 
19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.380 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.639 19:08:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:48.206 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.465 19:08:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.724 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.294 00:17:49.294 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.294 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.294 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.554 { 00:17:49.554 "cntlid": 47, 00:17:49.554 "qid": 0, 00:17:49.554 "state": "enabled", 00:17:49.554 "thread": "nvmf_tgt_poll_group_000", 00:17:49.554 "listen_address": { 00:17:49.554 "trtype": "RDMA", 00:17:49.554 "adrfam": "IPv4", 00:17:49.554 "traddr": "192.168.100.8", 00:17:49.554 "trsvcid": "4420" 00:17:49.554 }, 00:17:49.554 "peer_address": { 00:17:49.554 "trtype": "RDMA", 00:17:49.554 "adrfam": "IPv4", 00:17:49.554 "traddr": "192.168.100.8", 00:17:49.554 "trsvcid": "37013" 00:17:49.554 }, 00:17:49.554 "auth": { 00:17:49.554 "state": "completed", 00:17:49.554 "digest": "sha256", 00:17:49.554 "dhgroup": "ffdhe8192" 00:17:49.554 } 00:17:49.554 } 00:17:49.554 ]' 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:49.554 19:08:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.813 19:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:50.748 19:08:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.748 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:50.749 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
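Each connect_authenticate pass in this trace runs the same RPC sequence, varying only the digest, DH group, and key index. Below is a condensed sketch of one pass, assuming the key0..key3/ckey0..ckey3 keyring entries were set up earlier in the run; the socket paths, NQNs, and the 192.168.100.8:4420 RDMA listener are copied from the trace, but the loop body is a simplification of target/auth.sh, not its verbatim source:

    # Sketch of one connect_authenticate pass as exercised by this log.
    # Target-side RPCs go to the default SPDK socket; host-side RPCs use
    # the separate bdev_nvme instance behind /var/tmp/host.sock.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562

    digest=sha384 dhgroup=null keyid=0
    # Pin the host to a single digest/DH-group combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Register the host on the subsystem with the matching key pair
    # (the controller key is optional; the key3 passes above omit it).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attaching triggers the DH-HMAC-CHAP exchange on qpair creation.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # The qpair's auth block should now report the negotiated parameters.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # completed
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'  # sha384
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup' # null
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The escaped comparisons in the trace, e.g. [[ sha384 == \s\h\a\3\8\4 ]], are simply bash xtrace's rendering of these jq results being matched against the expected literals.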
00:17:51.007 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.008 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.265 00:17:51.265 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.265 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.265 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.524 { 00:17:51.524 "cntlid": 49, 00:17:51.524 "qid": 0, 00:17:51.524 "state": "enabled", 00:17:51.524 "thread": "nvmf_tgt_poll_group_000", 00:17:51.524 "listen_address": { 00:17:51.524 "trtype": "RDMA", 00:17:51.524 "adrfam": "IPv4", 00:17:51.524 "traddr": "192.168.100.8", 00:17:51.524 "trsvcid": "4420" 00:17:51.524 }, 00:17:51.524 "peer_address": { 00:17:51.524 "trtype": "RDMA", 00:17:51.524 "adrfam": "IPv4", 00:17:51.524 "traddr": "192.168.100.8", 00:17:51.524 "trsvcid": "52174" 00:17:51.524 }, 00:17:51.524 "auth": { 00:17:51.524 "state": "completed", 00:17:51.524 "digest": "sha384", 00:17:51.524 "dhgroup": "null" 00:17:51.524 } 00:17:51.524 } 00:17:51.524 ]' 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.524 19:08:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.783 19:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:17:52.720 19:08:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:52.720 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.977 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.236 00:17:53.236 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.236 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.236 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.493 { 00:17:53.493 "cntlid": 51, 00:17:53.493 "qid": 0, 00:17:53.493 "state": "enabled", 00:17:53.493 "thread": "nvmf_tgt_poll_group_000", 00:17:53.493 "listen_address": { 00:17:53.493 "trtype": "RDMA", 00:17:53.493 "adrfam": "IPv4", 00:17:53.493 "traddr": "192.168.100.8", 00:17:53.493 "trsvcid": "4420" 00:17:53.493 }, 00:17:53.493 "peer_address": { 00:17:53.493 "trtype": "RDMA", 00:17:53.493 "adrfam": "IPv4", 00:17:53.493 "traddr": "192.168.100.8", 00:17:53.493 "trsvcid": "46276" 00:17:53.493 }, 00:17:53.493 "auth": { 00:17:53.493 "state": "completed", 00:17:53.493 "digest": "sha384", 00:17:53.493 "dhgroup": "null" 00:17:53.493 } 00:17:53.493 } 00:17:53.493 ]' 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.493 19:08:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.750 19:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:17:54.682 19:08:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.682 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.940 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.199 00:17:55.199 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.199 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.199 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.457 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.458 { 00:17:55.458 "cntlid": 53, 00:17:55.458 "qid": 0, 00:17:55.458 "state": "enabled", 00:17:55.458 "thread": "nvmf_tgt_poll_group_000", 00:17:55.458 "listen_address": { 00:17:55.458 "trtype": "RDMA", 00:17:55.458 "adrfam": "IPv4", 00:17:55.458 "traddr": "192.168.100.8", 00:17:55.458 "trsvcid": "4420" 00:17:55.458 }, 00:17:55.458 "peer_address": { 00:17:55.458 "trtype": "RDMA", 00:17:55.458 "adrfam": "IPv4", 00:17:55.458 "traddr": "192.168.100.8", 00:17:55.458 "trsvcid": "53830" 00:17:55.458 }, 00:17:55.458 "auth": { 00:17:55.458 "state": "completed", 00:17:55.458 "digest": "sha384", 00:17:55.458 "dhgroup": "null" 00:17:55.458 } 00:17:55.458 } 00:17:55.458 ]' 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.458 19:08:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.716 19:08:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:17:56.651 19:08:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.910 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.168 00:17:57.168 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.168 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.168 19:08:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.427 { 00:17:57.427 "cntlid": 55, 00:17:57.427 "qid": 0, 00:17:57.427 "state": "enabled", 00:17:57.427 "thread": "nvmf_tgt_poll_group_000", 00:17:57.427 "listen_address": { 00:17:57.427 "trtype": "RDMA", 00:17:57.427 "adrfam": "IPv4", 00:17:57.427 "traddr": "192.168.100.8", 00:17:57.427 "trsvcid": "4420" 00:17:57.427 }, 00:17:57.427 "peer_address": { 00:17:57.427 "trtype": "RDMA", 00:17:57.427 "adrfam": "IPv4", 00:17:57.427 "traddr": "192.168.100.8", 00:17:57.427 "trsvcid": "40967" 00:17:57.427 }, 00:17:57.427 "auth": { 00:17:57.427 "state": "completed", 00:17:57.427 "digest": "sha384", 00:17:57.427 "dhgroup": "null" 00:17:57.427 } 00:17:57.427 } 00:17:57.427 ]' 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.427 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.685 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:57.685 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.685 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.685 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.685 19:08:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.943 19:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:17:58.511 19:08:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:17:58.770 19:08:51 
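
The key3 pass that just finished differs from the key2 one: there is no companion controller key, so authentication is unidirectional. That is what the `${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}` expansion in the trace encodes; when the ckey entry is empty, the flag pair disappears entirely. A sketch of the pattern, with `$keyid` standing in for the script's positional `$3`:

    # ckeys[keyid] unset/empty -> the :+ expansion yields no words at all,
    # so nvmf_subsystem_add_host registers a host key only (one-way auth).
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"
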
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:58.770 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.028 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.287 00:17:59.287 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.287 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.287 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.546 19:08:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.546 { 00:17:59.546 "cntlid": 57, 00:17:59.546 "qid": 0, 00:17:59.546 "state": "enabled", 00:17:59.546 "thread": "nvmf_tgt_poll_group_000", 00:17:59.546 "listen_address": { 00:17:59.546 "trtype": "RDMA", 00:17:59.546 "adrfam": "IPv4", 00:17:59.546 "traddr": "192.168.100.8", 00:17:59.546 "trsvcid": "4420" 00:17:59.546 }, 00:17:59.546 "peer_address": { 00:17:59.546 "trtype": "RDMA", 00:17:59.546 "adrfam": "IPv4", 00:17:59.546 "traddr": "192.168.100.8", 00:17:59.546 "trsvcid": "57764" 00:17:59.546 }, 00:17:59.546 "auth": { 00:17:59.546 "state": "completed", 00:17:59.546 "digest": "sha384", 00:17:59.546 "dhgroup": "ffdhe2048" 00:17:59.546 } 00:17:59.546 } 00:17:59.546 ]' 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.546 19:08:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.805 19:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:00.740 19:08:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.740 
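
After every attach the test makes the same three assertions against the qpair dump, tying the negotiated parameters back to what the iteration configured. Roughly, assuming the target RPC listens on its default socket and the jq paths seen in the trace above:

    # nvmf_subsystem_get_qpairs reports the authenticated qpair as JSON.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # auth finished
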
19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.740 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.000 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.258 00:18:01.258 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.258 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.258 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
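
Condensed, each of these connect/authenticate passes is the following sequence; `rpc_cmd` talks to the target, `hostrpc` wraps rpc.py against the host-side socket /var/tmp/host.sock, and `$subnqn`/`$hostnqn`/`$keyid` are illustrative stand-ins for the literal values in the log:

    # target side: allow this host NQN with the iteration's key material
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"  # ctrlr key absent for key3
    # host side: pin the negotiation, then attach with the matching keys
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # verify, then tear down before the next key/dhgroup combination
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
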
00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.515 { 00:18:01.515 "cntlid": 59, 00:18:01.515 "qid": 0, 00:18:01.515 "state": "enabled", 00:18:01.515 "thread": "nvmf_tgt_poll_group_000", 00:18:01.515 "listen_address": { 00:18:01.515 "trtype": "RDMA", 00:18:01.515 "adrfam": "IPv4", 00:18:01.515 "traddr": "192.168.100.8", 00:18:01.515 "trsvcid": "4420" 00:18:01.515 }, 00:18:01.515 "peer_address": { 00:18:01.515 "trtype": "RDMA", 00:18:01.515 "adrfam": "IPv4", 00:18:01.515 "traddr": "192.168.100.8", 00:18:01.515 "trsvcid": "51621" 00:18:01.515 }, 00:18:01.515 "auth": { 00:18:01.515 "state": "completed", 00:18:01.515 "digest": "sha384", 00:18:01.515 "dhgroup": "ffdhe2048" 00:18:01.515 } 00:18:01.515 } 00:18:01.515 ]' 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.515 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.773 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.773 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.773 19:08:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.773 19:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:02.709 19:08:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.968 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.227 00:18:03.227 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.227 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.227 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.485 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.485 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.485 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.485 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.485 19:08:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.485 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.485 { 00:18:03.485 "cntlid": 61, 00:18:03.485 "qid": 0, 00:18:03.485 "state": "enabled", 00:18:03.485 "thread": "nvmf_tgt_poll_group_000", 00:18:03.485 "listen_address": { 00:18:03.485 "trtype": "RDMA", 00:18:03.485 "adrfam": "IPv4", 00:18:03.485 "traddr": "192.168.100.8", 00:18:03.485 "trsvcid": "4420" 00:18:03.485 }, 00:18:03.485 "peer_address": { 00:18:03.485 "trtype": "RDMA", 00:18:03.485 "adrfam": "IPv4", 00:18:03.485 "traddr": "192.168.100.8", 00:18:03.485 "trsvcid": "55782" 00:18:03.485 }, 00:18:03.485 "auth": { 00:18:03.485 "state": "completed", 00:18:03.486 "digest": "sha384", 00:18:03.486 "dhgroup": "ffdhe2048" 00:18:03.486 } 00:18:03.486 } 00:18:03.486 ]' 00:18:03.486 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.486 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.486 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.745 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.745 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.745 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.745 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.745 19:08:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.745 19:08:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:04.681 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.940 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.940 19:08:57 
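
The `for keyid` / `bdev_nvme_set_options` pair above is the loop advancing to the next key under the same DH group. The whole section is two nested loops, which is why the same block of steps repeats with only the dhgroup and key names changing; schematically, following the traced `@92`/`@93` loop heads:

    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do     # key0 .. key3
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
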
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.199 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.199 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.458 { 00:18:05.458 "cntlid": 63, 00:18:05.458 "qid": 0, 00:18:05.458 "state": "enabled", 00:18:05.458 "thread": "nvmf_tgt_poll_group_000", 00:18:05.458 
"listen_address": { 00:18:05.458 "trtype": "RDMA", 00:18:05.458 "adrfam": "IPv4", 00:18:05.458 "traddr": "192.168.100.8", 00:18:05.458 "trsvcid": "4420" 00:18:05.458 }, 00:18:05.458 "peer_address": { 00:18:05.458 "trtype": "RDMA", 00:18:05.458 "adrfam": "IPv4", 00:18:05.458 "traddr": "192.168.100.8", 00:18:05.458 "trsvcid": "38653" 00:18:05.458 }, 00:18:05.458 "auth": { 00:18:05.458 "state": "completed", 00:18:05.458 "digest": "sha384", 00:18:05.458 "dhgroup": "ffdhe2048" 00:18:05.458 } 00:18:05.458 } 00:18:05.458 ]' 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.458 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.717 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.717 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.718 19:08:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.718 19:08:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.718 19:08:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.718 19:08:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.976 19:08:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:06.543 19:08:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:06.801 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.060 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 0 00:18:07.060 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.060 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.060 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.061 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.320 00:18:07.320 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.320 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.320 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.579 { 00:18:07.579 "cntlid": 65, 00:18:07.579 "qid": 0, 00:18:07.579 "state": "enabled", 00:18:07.579 "thread": "nvmf_tgt_poll_group_000", 00:18:07.579 "listen_address": { 00:18:07.579 "trtype": "RDMA", 00:18:07.579 "adrfam": "IPv4", 00:18:07.579 "traddr": "192.168.100.8", 00:18:07.579 "trsvcid": "4420" 00:18:07.579 }, 00:18:07.579 "peer_address": { 00:18:07.579 "trtype": "RDMA", 00:18:07.579 
"adrfam": "IPv4", 00:18:07.579 "traddr": "192.168.100.8", 00:18:07.579 "trsvcid": "42925" 00:18:07.579 }, 00:18:07.579 "auth": { 00:18:07.579 "state": "completed", 00:18:07.579 "digest": "sha384", 00:18:07.579 "dhgroup": "ffdhe3072" 00:18:07.579 } 00:18:07.579 } 00:18:07.579 ]' 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.579 19:08:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.579 19:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.579 19:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.837 19:09:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.773 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.774 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.033 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.292 00:18:09.292 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.292 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.292 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.550 { 00:18:09.550 "cntlid": 67, 00:18:09.550 "qid": 0, 00:18:09.550 "state": "enabled", 00:18:09.550 "thread": "nvmf_tgt_poll_group_000", 00:18:09.550 "listen_address": { 00:18:09.550 "trtype": "RDMA", 00:18:09.550 "adrfam": "IPv4", 00:18:09.550 "traddr": "192.168.100.8", 00:18:09.550 "trsvcid": "4420" 00:18:09.550 }, 00:18:09.550 "peer_address": { 00:18:09.550 "trtype": "RDMA", 00:18:09.550 "adrfam": "IPv4", 00:18:09.550 "traddr": "192.168.100.8", 00:18:09.550 "trsvcid": "42128" 00:18:09.550 }, 00:18:09.550 "auth": { 00:18:09.550 "state": "completed", 00:18:09.550 "digest": "sha384", 00:18:09.550 "dhgroup": "ffdhe3072" 00:18:09.550 } 00:18:09.550 } 
00:18:09.550 ]' 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.550 19:09:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.809 19:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.809 19:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.809 19:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.809 19:09:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:10.744 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.005 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.263 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.264 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.524 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.524 { 00:18:11.524 "cntlid": 69, 00:18:11.524 "qid": 0, 00:18:11.524 "state": "enabled", 00:18:11.524 "thread": "nvmf_tgt_poll_group_000", 00:18:11.524 "listen_address": { 00:18:11.524 "trtype": "RDMA", 00:18:11.524 "adrfam": "IPv4", 00:18:11.524 "traddr": "192.168.100.8", 00:18:11.524 "trsvcid": "4420" 00:18:11.524 }, 00:18:11.524 "peer_address": { 00:18:11.524 "trtype": "RDMA", 00:18:11.524 "adrfam": "IPv4", 00:18:11.524 "traddr": "192.168.100.8", 00:18:11.524 "trsvcid": "54248" 00:18:11.524 }, 00:18:11.524 "auth": { 00:18:11.524 "state": "completed", 00:18:11.524 "digest": "sha384", 00:18:11.524 "dhgroup": "ffdhe3072" 00:18:11.524 } 00:18:11.524 } 00:18:11.524 ]' 00:18:11.524 19:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.782 19:09:04 
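
The secrets appearing throughout follow the NVMe-oF in-band authentication representation `DHHC-1:tt:<base64>:`, where `tt` encodes the transformation applied to the underlying secret (00 = unhashed, 01/02/03 = SHA-256/384/512); the four keys in this run use one of each. Such strings can be produced with nvme-cli's key generator; exact flag spellings vary across nvme-cli versions, so treat this as illustrative only:

    # 48-byte secret, SHA-384 transform, bound to the host NQN
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn="$hostnqn"
    # prints something like DHHC-1:02:<base64>:
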
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.782 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.041 19:09:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.976 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.235 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.494 00:18:13.494 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.494 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.494 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.753 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.753 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.753 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.753 19:09:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.753 { 00:18:13.753 "cntlid": 71, 00:18:13.753 "qid": 0, 00:18:13.753 "state": "enabled", 00:18:13.753 "thread": "nvmf_tgt_poll_group_000", 00:18:13.753 "listen_address": { 00:18:13.753 "trtype": "RDMA", 00:18:13.753 "adrfam": "IPv4", 00:18:13.753 "traddr": "192.168.100.8", 00:18:13.753 "trsvcid": "4420" 00:18:13.753 }, 00:18:13.753 "peer_address": { 00:18:13.753 "trtype": "RDMA", 00:18:13.753 "adrfam": "IPv4", 00:18:13.753 "traddr": "192.168.100.8", 00:18:13.753 "trsvcid": "60913" 00:18:13.753 }, 00:18:13.753 "auth": { 00:18:13.753 "state": "completed", 00:18:13.753 "digest": "sha384", 00:18:13.753 "dhgroup": "ffdhe3072" 00:18:13.753 } 00:18:13.753 } 00:18:13.753 ]' 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.753 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.012 19:09:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.948 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.206 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.465 00:18:15.465 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.465 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.465 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.724 19:09:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.724 { 00:18:15.724 "cntlid": 73, 00:18:15.724 "qid": 0, 00:18:15.724 "state": "enabled", 00:18:15.724 "thread": "nvmf_tgt_poll_group_000", 00:18:15.724 "listen_address": { 00:18:15.724 "trtype": "RDMA", 00:18:15.724 "adrfam": "IPv4", 00:18:15.724 "traddr": "192.168.100.8", 00:18:15.724 "trsvcid": "4420" 00:18:15.724 }, 00:18:15.724 "peer_address": { 00:18:15.724 "trtype": "RDMA", 00:18:15.724 "adrfam": "IPv4", 00:18:15.724 "traddr": "192.168.100.8", 00:18:15.724 "trsvcid": "52018" 00:18:15.724 }, 00:18:15.724 "auth": { 00:18:15.724 "state": "completed", 00:18:15.724 "digest": "sha384", 00:18:15.724 "dhgroup": "ffdhe4096" 00:18:15.724 } 00:18:15.724 } 00:18:15.724 ]' 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.724 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.982 19:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.917 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.175 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.434 00:18:17.434 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.434 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.434 19:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.692 { 00:18:17.692 "cntlid": 75, 00:18:17.692 "qid": 0, 00:18:17.692 "state": "enabled", 00:18:17.692 "thread": "nvmf_tgt_poll_group_000", 00:18:17.692 "listen_address": { 00:18:17.692 "trtype": "RDMA", 00:18:17.692 "adrfam": "IPv4", 00:18:17.692 "traddr": "192.168.100.8", 00:18:17.692 "trsvcid": "4420" 00:18:17.692 }, 00:18:17.692 "peer_address": { 00:18:17.692 "trtype": "RDMA", 00:18:17.692 "adrfam": "IPv4", 00:18:17.692 "traddr": "192.168.100.8", 00:18:17.692 "trsvcid": "43905" 00:18:17.692 }, 00:18:17.692 "auth": { 00:18:17.692 "state": "completed", 00:18:17.692 "digest": "sha384", 00:18:17.692 "dhgroup": "ffdhe4096" 00:18:17.692 } 00:18:17.692 } 00:18:17.692 ]' 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.692 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.951 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.951 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.951 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:17.951 19:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:18.887 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.146 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.406 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.406 19:09:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.665 00:18:19.665 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.665 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.665 19:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.665 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.665 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.665 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.665 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.924 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.924 { 00:18:19.924 "cntlid": 77, 00:18:19.924 "qid": 0, 00:18:19.924 "state": "enabled", 00:18:19.924 "thread": "nvmf_tgt_poll_group_000", 00:18:19.924 "listen_address": { 00:18:19.924 "trtype": "RDMA", 00:18:19.924 "adrfam": "IPv4", 00:18:19.924 "traddr": "192.168.100.8", 00:18:19.924 "trsvcid": "4420" 00:18:19.924 }, 00:18:19.924 "peer_address": { 00:18:19.924 "trtype": "RDMA", 00:18:19.924 "adrfam": "IPv4", 00:18:19.925 "traddr": "192.168.100.8", 00:18:19.925 "trsvcid": "56828" 00:18:19.925 }, 00:18:19.925 "auth": { 00:18:19.925 "state": "completed", 00:18:19.925 "digest": "sha384", 00:18:19.925 "dhgroup": "ffdhe4096" 00:18:19.925 } 00:18:19.925 } 00:18:19.925 ]' 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.925 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.183 19:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.119 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.377 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.635 00:18:21.635 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.635 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.635 19:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.894 { 00:18:21.894 "cntlid": 79, 00:18:21.894 "qid": 0, 00:18:21.894 "state": "enabled", 00:18:21.894 "thread": "nvmf_tgt_poll_group_000", 00:18:21.894 "listen_address": { 00:18:21.894 "trtype": "RDMA", 00:18:21.894 "adrfam": "IPv4", 00:18:21.894 "traddr": "192.168.100.8", 00:18:21.894 "trsvcid": "4420" 00:18:21.894 }, 00:18:21.894 "peer_address": { 00:18:21.894 "trtype": "RDMA", 00:18:21.894 "adrfam": "IPv4", 00:18:21.894 "traddr": "192.168.100.8", 00:18:21.894 "trsvcid": "60173" 00:18:21.894 }, 00:18:21.894 "auth": { 00:18:21.894 "state": "completed", 00:18:21.894 "digest": "sha384", 00:18:21.894 "dhgroup": "ffdhe4096" 00:18:21.894 } 00:18:21.894 } 00:18:21.894 ]' 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.894 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.153 19:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.090 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.348 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:23.348 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.348 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.348 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.349 19:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.607 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.865 { 00:18:23.865 "cntlid": 81, 00:18:23.865 "qid": 0, 00:18:23.865 "state": "enabled", 00:18:23.865 "thread": "nvmf_tgt_poll_group_000", 00:18:23.865 "listen_address": { 00:18:23.865 "trtype": "RDMA", 00:18:23.865 "adrfam": "IPv4", 00:18:23.865 "traddr": "192.168.100.8", 00:18:23.865 "trsvcid": "4420" 00:18:23.865 }, 00:18:23.865 "peer_address": { 00:18:23.865 "trtype": "RDMA", 00:18:23.865 "adrfam": "IPv4", 00:18:23.865 "traddr": "192.168.100.8", 00:18:23.865 "trsvcid": "43930" 00:18:23.865 }, 00:18:23.865 "auth": { 00:18:23.865 "state": "completed", 00:18:23.865 "digest": "sha384", 00:18:23.865 "dhgroup": "ffdhe6144" 00:18:23.865 } 00:18:23.865 } 00:18:23.865 ]' 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.865 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.124 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.124 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.124 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.124 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.124 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.382 19:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:24.949 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
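The pass that just completed is the shape every iteration of this test takes: target/auth.sh walks a matrix of DH groups and key indexes, and for each pair it registers the key pair with the target subsystem, pins the SPDK host driver to a single digest/dhgroup combination so the negotiation can only succeed one way, attaches and audits the qpair, and finally replays the same credentials through the kernel initiator with nvme-cli. Below is a minimal sketch of one such pass, assuming key0/ckey0 were loaded into the SPDK keyring earlier in the script (the excerpt does not show that setup), that rpc_cmd talks to the target's default RPC socket while hostrpc wraps rpc.py -s /var/tmp/host.sock as the trace shows, and with RPC/SUBNQN/HOSTNQN as illustrative variable names; the <base64> secrets are placeholders.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562

# Target side: this host may only connect if it proves possession of key0
# (and, for bidirectional auth, the controller proves ckey0 back to it).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (SPDK initiator behind /var/tmp/host.sock): allow exactly one
# digest/dhgroup combination, then attach; the attach fails unless DH-CHAP completes.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Audit on the target: the new qpair must report the negotiated parameters.
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .digest, .dhgroup, .state'

# Kernel initiator leg: detach the SPDK controller, reconnect with nvme-cli using
# the same secrets in DHHC-1 wire form (placeholders here), then tear down.
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:<base64>:' --dhchap-ctrl-secret 'DHHC-1:03:<base64>:'
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The DHHC-1:NN: prefix on each secret is the NVMe in-band authentication secret representation: 00 marks a cleartext secret, while 01, 02, and 03 mark secrets already transformed with SHA-256, SHA-384, and SHA-512 respectively, which is why the host secret in the connect above carries 00 and its controller counterpart carries 03.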
00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.208 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.467 19:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.725 00:18:25.725 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.725 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.725 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.984 { 00:18:25.984 "cntlid": 83, 00:18:25.984 "qid": 0, 00:18:25.984 "state": "enabled", 00:18:25.984 "thread": "nvmf_tgt_poll_group_000", 00:18:25.984 "listen_address": { 00:18:25.984 "trtype": "RDMA", 00:18:25.984 "adrfam": "IPv4", 00:18:25.984 "traddr": "192.168.100.8", 00:18:25.984 "trsvcid": "4420" 00:18:25.984 }, 00:18:25.984 "peer_address": { 00:18:25.984 "trtype": "RDMA", 00:18:25.984 "adrfam": "IPv4", 00:18:25.984 "traddr": "192.168.100.8", 00:18:25.984 "trsvcid": "47101" 00:18:25.984 }, 00:18:25.984 "auth": { 00:18:25.984 "state": "completed", 00:18:25.984 "digest": "sha384", 00:18:25.984 "dhgroup": "ffdhe6144" 00:18:25.984 } 00:18:25.984 } 00:18:25.984 ]' 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.984 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.243 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.243 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.243 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.243 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.243 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.502 19:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:27.069 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:27.328 19:09:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.328 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.587 19:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.846 00:18:27.846 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.846 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.846 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
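The qpair audit that resumes just below is the same check every pass runs after its attach; condensed into a sketch, it amounts to the following (this pass is sha384/ffdhe6144 with key2, rpc_cmd is again assumed to reach the target's RPC socket, and the qpairs variable name mirrors the script's own).

# The target must report the admin qpair as authenticated with exactly the
# digest and DH group this iteration configured on the host side.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

The odd-looking [[ sha384 == \s\h\a\3\8\4 ]] forms in the trace are only bash xtrace at work: the right-hand side of == inside [[ ]] is quoted in the script, and xtrace prints a quoted pattern with every character escaped so the comparison is literal rather than a glob match.
00:18:28.106 19:09:20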
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.106 { 00:18:28.106 "cntlid": 85, 00:18:28.106 "qid": 0, 00:18:28.106 "state": "enabled", 00:18:28.106 "thread": "nvmf_tgt_poll_group_000", 00:18:28.106 "listen_address": { 00:18:28.106 "trtype": "RDMA", 00:18:28.106 "adrfam": "IPv4", 00:18:28.106 "traddr": "192.168.100.8", 00:18:28.106 "trsvcid": "4420" 00:18:28.106 }, 00:18:28.106 "peer_address": { 00:18:28.106 "trtype": "RDMA", 00:18:28.106 "adrfam": "IPv4", 00:18:28.106 "traddr": "192.168.100.8", 00:18:28.106 "trsvcid": "42359" 00:18:28.106 }, 00:18:28.106 "auth": { 00:18:28.106 "state": "completed", 00:18:28.106 "digest": "sha384", 00:18:28.106 "dhgroup": "ffdhe6144" 00:18:28.106 } 00:18:28.106 } 00:18:28.106 ]' 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.106 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.365 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.365 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.365 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.365 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.365 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.624 19:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:29.191 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.450 19:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.709 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.967 00:18:29.967 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.967 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.967 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.226 
19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.226 { 00:18:30.226 "cntlid": 87, 00:18:30.226 "qid": 0, 00:18:30.226 "state": "enabled", 00:18:30.226 "thread": "nvmf_tgt_poll_group_000", 00:18:30.226 "listen_address": { 00:18:30.226 "trtype": "RDMA", 00:18:30.226 "adrfam": "IPv4", 00:18:30.226 "traddr": "192.168.100.8", 00:18:30.226 "trsvcid": "4420" 00:18:30.226 }, 00:18:30.226 "peer_address": { 00:18:30.226 "trtype": "RDMA", 00:18:30.226 "adrfam": "IPv4", 00:18:30.226 "traddr": "192.168.100.8", 00:18:30.226 "trsvcid": "43443" 00:18:30.226 }, 00:18:30.226 "auth": { 00:18:30.226 "state": "completed", 00:18:30.226 "digest": "sha384", 00:18:30.226 "dhgroup": "ffdhe6144" 00:18:30.226 } 00:18:30.226 } 00:18:30.226 ]' 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.226 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.485 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.485 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.485 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.485 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.485 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.744 19:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:31.314 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:18:31.572 19:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.831 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.396 00:18:32.396 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.396 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.396 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.655 { 00:18:32.655 
"cntlid": 89, 00:18:32.655 "qid": 0, 00:18:32.655 "state": "enabled", 00:18:32.655 "thread": "nvmf_tgt_poll_group_000", 00:18:32.655 "listen_address": { 00:18:32.655 "trtype": "RDMA", 00:18:32.655 "adrfam": "IPv4", 00:18:32.655 "traddr": "192.168.100.8", 00:18:32.655 "trsvcid": "4420" 00:18:32.655 }, 00:18:32.655 "peer_address": { 00:18:32.655 "trtype": "RDMA", 00:18:32.655 "adrfam": "IPv4", 00:18:32.655 "traddr": "192.168.100.8", 00:18:32.655 "trsvcid": "55319" 00:18:32.655 }, 00:18:32.655 "auth": { 00:18:32.655 "state": "completed", 00:18:32.655 "digest": "sha384", 00:18:32.655 "dhgroup": "ffdhe8192" 00:18:32.655 } 00:18:32.655 } 00:18:32.655 ]' 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.655 19:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.914 19:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:33.850 19:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.850 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.109 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.677 00:18:34.677 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.677 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.677 19:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.677 { 00:18:34.677 "cntlid": 91, 00:18:34.677 "qid": 0, 00:18:34.677 "state": "enabled", 00:18:34.677 "thread": "nvmf_tgt_poll_group_000", 00:18:34.677 "listen_address": { 00:18:34.677 "trtype": "RDMA", 00:18:34.677 "adrfam": "IPv4", 00:18:34.677 "traddr": "192.168.100.8", 
00:18:34.677 "trsvcid": "4420" 00:18:34.677 }, 00:18:34.677 "peer_address": { 00:18:34.677 "trtype": "RDMA", 00:18:34.677 "adrfam": "IPv4", 00:18:34.677 "traddr": "192.168.100.8", 00:18:34.677 "trsvcid": "60422" 00:18:34.677 }, 00:18:34.677 "auth": { 00:18:34.677 "state": "completed", 00:18:34.677 "digest": "sha384", 00:18:34.677 "dhgroup": "ffdhe8192" 00:18:34.677 } 00:18:34.677 } 00:18:34.677 ]' 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.677 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.936 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.936 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.936 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.936 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.936 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.208 19:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:35.831 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.120 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.491 
19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.491 19:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.816 00:18:36.816 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.816 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.816 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.079 { 00:18:37.079 "cntlid": 93, 00:18:37.079 "qid": 0, 00:18:37.079 "state": "enabled", 00:18:37.079 "thread": "nvmf_tgt_poll_group_000", 00:18:37.079 "listen_address": { 00:18:37.079 "trtype": "RDMA", 00:18:37.079 "adrfam": "IPv4", 00:18:37.079 "traddr": "192.168.100.8", 00:18:37.079 "trsvcid": "4420" 00:18:37.079 }, 00:18:37.079 "peer_address": { 00:18:37.079 "trtype": "RDMA", 00:18:37.079 "adrfam": "IPv4", 00:18:37.079 "traddr": "192.168.100.8", 00:18:37.079 "trsvcid": "60835" 00:18:37.079 }, 00:18:37.079 "auth": { 00:18:37.079 "state": "completed", 00:18:37.079 "digest": 
"sha384", 00:18:37.079 "dhgroup": "ffdhe8192" 00:18:37.079 } 00:18:37.079 } 00:18:37.079 ]' 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.079 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.337 19:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:38.271 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.271 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.272 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 
00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.531 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.532 19:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.099 00:18:39.099 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.099 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.099 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.099 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.357 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.357 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.358 { 00:18:39.358 "cntlid": 95, 00:18:39.358 "qid": 0, 00:18:39.358 "state": "enabled", 00:18:39.358 "thread": "nvmf_tgt_poll_group_000", 00:18:39.358 "listen_address": { 00:18:39.358 "trtype": "RDMA", 00:18:39.358 "adrfam": "IPv4", 00:18:39.358 "traddr": "192.168.100.8", 00:18:39.358 "trsvcid": "4420" 00:18:39.358 }, 00:18:39.358 "peer_address": { 00:18:39.358 "trtype": "RDMA", 00:18:39.358 "adrfam": "IPv4", 00:18:39.358 "traddr": "192.168.100.8", 00:18:39.358 "trsvcid": "59316" 00:18:39.358 }, 00:18:39.358 "auth": { 00:18:39.358 "state": "completed", 00:18:39.358 "digest": "sha384", 00:18:39.358 "dhgroup": "ffdhe8192" 00:18:39.358 } 00:18:39.358 } 00:18:39.358 ]' 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.358 19:09:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.358 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.616 19:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.552 19:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.811 19:09:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.811 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.074 00:18:41.074 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.074 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.074 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.331 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.331 { 00:18:41.331 "cntlid": 97, 00:18:41.331 "qid": 0, 00:18:41.331 "state": "enabled", 00:18:41.331 "thread": "nvmf_tgt_poll_group_000", 00:18:41.331 "listen_address": { 00:18:41.331 "trtype": "RDMA", 00:18:41.331 "adrfam": "IPv4", 00:18:41.331 "traddr": "192.168.100.8", 00:18:41.331 "trsvcid": "4420" 00:18:41.331 }, 00:18:41.331 "peer_address": { 00:18:41.331 "trtype": "RDMA", 00:18:41.331 "adrfam": "IPv4", 00:18:41.331 "traddr": "192.168.100.8", 00:18:41.331 "trsvcid": "49113" 00:18:41.331 }, 00:18:41.331 "auth": { 00:18:41.331 "state": "completed", 00:18:41.331 "digest": "sha512", 00:18:41.331 "dhgroup": "null" 00:18:41.331 } 00:18:41.331 } 00:18:41.331 ]' 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.332 19:09:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.332 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.590 19:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.526 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.527 19:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.786 19:09:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.786 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.044 00:18:43.044 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.044 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.044 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.302 { 00:18:43.302 "cntlid": 99, 00:18:43.302 "qid": 0, 00:18:43.302 "state": "enabled", 00:18:43.302 "thread": "nvmf_tgt_poll_group_000", 00:18:43.302 "listen_address": { 00:18:43.302 "trtype": "RDMA", 00:18:43.302 "adrfam": "IPv4", 00:18:43.302 "traddr": "192.168.100.8", 00:18:43.302 "trsvcid": "4420" 00:18:43.302 }, 00:18:43.302 "peer_address": { 00:18:43.302 "trtype": "RDMA", 00:18:43.302 "adrfam": "IPv4", 00:18:43.302 "traddr": "192.168.100.8", 00:18:43.302 "trsvcid": "38307" 00:18:43.302 }, 00:18:43.302 "auth": { 00:18:43.302 "state": "completed", 00:18:43.302 "digest": "sha512", 00:18:43.302 "dhgroup": "null" 00:18:43.302 } 00:18:43.302 } 00:18:43.302 ]' 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.302 19:09:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.302 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.561 19:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.497 19:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.756 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.015 00:18:45.015 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.015 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.015 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.275 { 00:18:45.275 "cntlid": 101, 00:18:45.275 "qid": 0, 00:18:45.275 "state": "enabled", 00:18:45.275 "thread": "nvmf_tgt_poll_group_000", 00:18:45.275 "listen_address": { 00:18:45.275 "trtype": "RDMA", 00:18:45.275 "adrfam": "IPv4", 00:18:45.275 "traddr": "192.168.100.8", 00:18:45.275 "trsvcid": "4420" 00:18:45.275 }, 00:18:45.275 "peer_address": { 00:18:45.275 "trtype": "RDMA", 00:18:45.275 "adrfam": "IPv4", 00:18:45.275 "traddr": "192.168.100.8", 00:18:45.275 "trsvcid": "48173" 00:18:45.275 }, 00:18:45.275 "auth": { 00:18:45.275 "state": "completed", 00:18:45.275 "digest": "sha512", 00:18:45.275 "dhgroup": "null" 00:18:45.275 } 00:18:45.275 } 00:18:45.275 ]' 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.275 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.534 19:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.470 19:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.729 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.729 19:09:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.988 00:18:46.988 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.988 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.988 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.246 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.246 { 00:18:47.246 "cntlid": 103, 00:18:47.246 "qid": 0, 00:18:47.246 "state": "enabled", 00:18:47.246 "thread": "nvmf_tgt_poll_group_000", 00:18:47.246 "listen_address": { 00:18:47.246 "trtype": "RDMA", 00:18:47.246 "adrfam": "IPv4", 00:18:47.246 "traddr": "192.168.100.8", 00:18:47.246 "trsvcid": "4420" 00:18:47.246 }, 00:18:47.246 "peer_address": { 00:18:47.246 "trtype": "RDMA", 00:18:47.246 "adrfam": "IPv4", 00:18:47.246 "traddr": "192.168.100.8", 00:18:47.246 "trsvcid": "46398" 00:18:47.246 }, 00:18:47.246 "auth": { 00:18:47.246 "state": "completed", 00:18:47.246 "digest": "sha512", 00:18:47.247 "dhgroup": "null" 00:18:47.247 } 00:18:47.247 } 00:18:47.247 ]' 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.247 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.505 19:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 
80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.441 19:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.699 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.957 00:18:48.957 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.957 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.957 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.214 { 00:18:49.214 "cntlid": 105, 00:18:49.214 "qid": 0, 00:18:49.214 "state": "enabled", 00:18:49.214 "thread": "nvmf_tgt_poll_group_000", 00:18:49.214 "listen_address": { 00:18:49.214 "trtype": "RDMA", 00:18:49.214 "adrfam": "IPv4", 00:18:49.214 "traddr": "192.168.100.8", 00:18:49.214 "trsvcid": "4420" 00:18:49.214 }, 00:18:49.214 "peer_address": { 00:18:49.214 "trtype": "RDMA", 00:18:49.214 "adrfam": "IPv4", 00:18:49.214 "traddr": "192.168.100.8", 00:18:49.214 "trsvcid": "35008" 00:18:49.214 }, 00:18:49.214 "auth": { 00:18:49.214 "state": "completed", 00:18:49.214 "digest": "sha512", 00:18:49.214 "dhgroup": "ffdhe2048" 00:18:49.214 } 00:18:49.214 } 00:18:49.214 ]' 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.214 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.215 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.215 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.215 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.473 19:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret 
DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:50.409 19:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.667 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.926 00:18:50.926 19:09:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.926 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.926 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.184 { 00:18:51.184 "cntlid": 107, 00:18:51.184 "qid": 0, 00:18:51.184 "state": "enabled", 00:18:51.184 "thread": "nvmf_tgt_poll_group_000", 00:18:51.184 "listen_address": { 00:18:51.184 "trtype": "RDMA", 00:18:51.184 "adrfam": "IPv4", 00:18:51.184 "traddr": "192.168.100.8", 00:18:51.184 "trsvcid": "4420" 00:18:51.184 }, 00:18:51.184 "peer_address": { 00:18:51.184 "trtype": "RDMA", 00:18:51.184 "adrfam": "IPv4", 00:18:51.184 "traddr": "192.168.100.8", 00:18:51.184 "trsvcid": "50429" 00:18:51.184 }, 00:18:51.184 "auth": { 00:18:51.184 "state": "completed", 00:18:51.184 "digest": "sha512", 00:18:51.184 "dhgroup": "ffdhe2048" 00:18:51.184 } 00:18:51.184 } 00:18:51.184 ]' 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.184 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.443 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.443 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.443 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.443 19:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:18:52.380 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.639 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.639 19:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.639 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.898 00:18:52.898 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.898 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.898 19:09:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.157 { 00:18:53.157 "cntlid": 109, 00:18:53.157 "qid": 0, 00:18:53.157 "state": "enabled", 00:18:53.157 "thread": "nvmf_tgt_poll_group_000", 00:18:53.157 "listen_address": { 00:18:53.157 "trtype": "RDMA", 00:18:53.157 "adrfam": "IPv4", 00:18:53.157 "traddr": "192.168.100.8", 00:18:53.157 "trsvcid": "4420" 00:18:53.157 }, 00:18:53.157 "peer_address": { 00:18:53.157 "trtype": "RDMA", 00:18:53.157 "adrfam": "IPv4", 00:18:53.157 "traddr": "192.168.100.8", 00:18:53.157 "trsvcid": "46147" 00:18:53.157 }, 00:18:53.157 "auth": { 00:18:53.157 "state": "completed", 00:18:53.157 "digest": "sha512", 00:18:53.157 "dhgroup": "ffdhe2048" 00:18:53.157 } 00:18:53.157 } 00:18:53.157 ]' 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.157 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.416 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.416 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.416 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.416 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.416 19:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:18:54.353 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.612 19:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.612 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.870 00:18:54.870 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.870 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.870 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.130 19:09:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.130 { 00:18:55.130 "cntlid": 111, 00:18:55.130 "qid": 0, 00:18:55.130 "state": "enabled", 00:18:55.130 "thread": "nvmf_tgt_poll_group_000", 00:18:55.130 "listen_address": { 00:18:55.130 "trtype": "RDMA", 00:18:55.130 "adrfam": "IPv4", 00:18:55.130 "traddr": "192.168.100.8", 00:18:55.130 "trsvcid": "4420" 00:18:55.130 }, 00:18:55.130 "peer_address": { 00:18:55.130 "trtype": "RDMA", 00:18:55.130 "adrfam": "IPv4", 00:18:55.130 "traddr": "192.168.100.8", 00:18:55.130 "trsvcid": "45268" 00:18:55.130 }, 00:18:55.130 "auth": { 00:18:55.130 "state": "completed", 00:18:55.130 "digest": "sha512", 00:18:55.130 "dhgroup": "ffdhe2048" 00:18:55.130 } 00:18:55.130 } 00:18:55.130 ]' 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.130 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.389 19:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:18:56.324 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.582 
19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.582 19:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.582 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:56.582 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.582 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.582 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.583 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.841 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.100 { 00:18:57.100 "cntlid": 113, 00:18:57.100 "qid": 0, 00:18:57.100 "state": "enabled", 00:18:57.100 "thread": "nvmf_tgt_poll_group_000", 00:18:57.100 "listen_address": { 00:18:57.100 "trtype": "RDMA", 00:18:57.100 "adrfam": "IPv4", 00:18:57.100 "traddr": "192.168.100.8", 00:18:57.100 "trsvcid": "4420" 00:18:57.100 }, 00:18:57.100 "peer_address": { 00:18:57.100 "trtype": "RDMA", 00:18:57.100 "adrfam": "IPv4", 00:18:57.100 "traddr": "192.168.100.8", 00:18:57.100 "trsvcid": "43055" 00:18:57.100 }, 00:18:57.100 "auth": { 00:18:57.100 "state": "completed", 00:18:57.100 "digest": "sha512", 00:18:57.100 "dhgroup": "ffdhe3072" 00:18:57.100 } 00:18:57.100 } 00:18:57.100 ]' 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.100 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.358 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.358 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.358 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.358 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.358 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.617 19:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:18:58.183 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.442 19:09:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.442 19:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.701 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.960 00:18:58.960 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.960 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.960 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.219 
19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.219 { 00:18:59.219 "cntlid": 115, 00:18:59.219 "qid": 0, 00:18:59.219 "state": "enabled", 00:18:59.219 "thread": "nvmf_tgt_poll_group_000", 00:18:59.219 "listen_address": { 00:18:59.219 "trtype": "RDMA", 00:18:59.219 "adrfam": "IPv4", 00:18:59.219 "traddr": "192.168.100.8", 00:18:59.219 "trsvcid": "4420" 00:18:59.219 }, 00:18:59.219 "peer_address": { 00:18:59.219 "trtype": "RDMA", 00:18:59.219 "adrfam": "IPv4", 00:18:59.219 "traddr": "192.168.100.8", 00:18:59.219 "trsvcid": "33350" 00:18:59.219 }, 00:18:59.219 "auth": { 00:18:59.219 "state": "completed", 00:18:59.219 "digest": "sha512", 00:18:59.219 "dhgroup": "ffdhe3072" 00:18:59.219 } 00:18:59.219 } 00:18:59.219 ]' 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.219 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.477 19:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.414 19:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.672 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:00.672 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.672 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.672 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.673 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.931 00:19:00.931 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.931 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.931 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.189 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.189 { 00:19:01.189 "cntlid": 117, 00:19:01.189 "qid": 0, 00:19:01.189 "state": "enabled", 00:19:01.189 "thread": "nvmf_tgt_poll_group_000", 
00:19:01.189 "listen_address": { 00:19:01.189 "trtype": "RDMA", 00:19:01.189 "adrfam": "IPv4", 00:19:01.189 "traddr": "192.168.100.8", 00:19:01.189 "trsvcid": "4420" 00:19:01.189 }, 00:19:01.189 "peer_address": { 00:19:01.189 "trtype": "RDMA", 00:19:01.189 "adrfam": "IPv4", 00:19:01.189 "traddr": "192.168.100.8", 00:19:01.189 "trsvcid": "43909" 00:19:01.189 }, 00:19:01.189 "auth": { 00:19:01.189 "state": "completed", 00:19:01.189 "digest": "sha512", 00:19:01.189 "dhgroup": "ffdhe3072" 00:19:01.190 } 00:19:01.190 } 00:19:01.190 ]' 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.190 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.447 19:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:19:02.381 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:02.640 19:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 
00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.640 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.898 00:19:02.898 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.898 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.898 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.157 { 00:19:03.157 "cntlid": 119, 00:19:03.157 "qid": 0, 00:19:03.157 "state": "enabled", 00:19:03.157 "thread": "nvmf_tgt_poll_group_000", 00:19:03.157 "listen_address": { 00:19:03.157 "trtype": "RDMA", 00:19:03.157 "adrfam": "IPv4", 00:19:03.157 "traddr": "192.168.100.8", 00:19:03.157 "trsvcid": "4420" 00:19:03.157 }, 00:19:03.157 "peer_address": { 00:19:03.157 "trtype": "RDMA", 00:19:03.157 "adrfam": "IPv4", 00:19:03.157 "traddr": "192.168.100.8", 00:19:03.157 "trsvcid": "47790" 00:19:03.157 }, 00:19:03.157 
"auth": { 00:19:03.157 "state": "completed", 00:19:03.157 "digest": "sha512", 00:19:03.157 "dhgroup": "ffdhe3072" 00:19:03.157 } 00:19:03.157 } 00:19:03.157 ]' 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.157 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.416 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.416 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.416 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.416 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.416 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.674 19:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:19:04.241 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.500 19:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.758 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.017 00:19:05.017 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.017 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.017 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.276 { 00:19:05.276 "cntlid": 121, 00:19:05.276 "qid": 0, 00:19:05.276 "state": "enabled", 00:19:05.276 "thread": "nvmf_tgt_poll_group_000", 00:19:05.276 "listen_address": { 00:19:05.276 "trtype": "RDMA", 00:19:05.276 "adrfam": "IPv4", 00:19:05.276 "traddr": "192.168.100.8", 00:19:05.276 "trsvcid": "4420" 00:19:05.276 }, 00:19:05.276 "peer_address": { 00:19:05.276 "trtype": "RDMA", 00:19:05.276 "adrfam": "IPv4", 00:19:05.276 "traddr": "192.168.100.8", 00:19:05.276 "trsvcid": "51931" 00:19:05.276 }, 00:19:05.276 "auth": { 00:19:05.276 "state": "completed", 00:19:05.276 "digest": "sha512", 00:19:05.276 "dhgroup": "ffdhe4096" 00:19:05.276 } 00:19:05.276 } 00:19:05.276 ]' 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.276 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.534 19:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:19:06.469 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.469 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:06.469 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.469 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.727 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.727 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.728 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:06.728 19:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.728 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.986 00:19:06.986 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.986 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.986 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.245 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.245 { 00:19:07.245 "cntlid": 123, 00:19:07.245 "qid": 0, 00:19:07.245 "state": "enabled", 00:19:07.245 "thread": "nvmf_tgt_poll_group_000", 00:19:07.245 "listen_address": { 00:19:07.245 "trtype": "RDMA", 00:19:07.245 "adrfam": "IPv4", 00:19:07.245 "traddr": "192.168.100.8", 00:19:07.245 "trsvcid": "4420" 00:19:07.245 }, 00:19:07.245 "peer_address": { 00:19:07.245 "trtype": "RDMA", 00:19:07.245 "adrfam": "IPv4", 00:19:07.245 "traddr": "192.168.100.8", 00:19:07.245 "trsvcid": "45048" 00:19:07.245 }, 00:19:07.245 "auth": { 00:19:07.245 "state": "completed", 00:19:07.245 "digest": "sha512", 00:19:07.245 "dhgroup": "ffdhe4096" 00:19:07.245 } 00:19:07.245 } 00:19:07.245 ]' 00:19:07.246 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.246 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.246 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.246 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.246 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.505 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.505 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.505 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.505 19:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:19:08.441 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.700 19:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.700 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.266 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.266 { 00:19:09.266 "cntlid": 125, 00:19:09.266 "qid": 0, 00:19:09.266 "state": "enabled", 00:19:09.266 "thread": "nvmf_tgt_poll_group_000", 00:19:09.266 "listen_address": { 00:19:09.266 "trtype": "RDMA", 00:19:09.266 "adrfam": "IPv4", 00:19:09.266 "traddr": "192.168.100.8", 00:19:09.266 "trsvcid": "4420" 00:19:09.266 }, 00:19:09.266 "peer_address": { 00:19:09.266 "trtype": "RDMA", 00:19:09.266 "adrfam": "IPv4", 00:19:09.266 "traddr": "192.168.100.8", 00:19:09.266 "trsvcid": "33943" 00:19:09.266 }, 00:19:09.266 "auth": { 00:19:09.266 "state": "completed", 00:19:09.266 "digest": "sha512", 00:19:09.266 "dhgroup": "ffdhe4096" 00:19:09.266 } 00:19:09.266 } 00:19:09.266 ]' 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.266 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.524 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.524 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
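The qpair verification that this section repeats for every digest/dhgroup/key combination condenses to the sketch below (illustrative, reconstructed from the xtrace lines around this point; "scripts/rpc.py" abbreviates the full workspace path, and this call goes to the target's default RPC socket rather than the host's /var/tmp/host.sock). The state check that completes this particular pass continues in the trace below.

    # Fetch the subsystem's qpairs from the target and assert the
    # negotiated auth parameters, as target/auth.sh@44-@48 trace them.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]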
00:19:09.524 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.524 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.525 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.525 19:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:19:10.457 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.715 19:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.715 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.282 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.282 { 00:19:11.282 "cntlid": 127, 00:19:11.282 "qid": 0, 00:19:11.282 "state": "enabled", 00:19:11.282 "thread": "nvmf_tgt_poll_group_000", 00:19:11.282 "listen_address": { 00:19:11.282 "trtype": "RDMA", 00:19:11.282 "adrfam": "IPv4", 00:19:11.282 "traddr": "192.168.100.8", 00:19:11.282 "trsvcid": "4420" 00:19:11.282 }, 00:19:11.282 "peer_address": { 00:19:11.282 "trtype": "RDMA", 00:19:11.282 "adrfam": "IPv4", 00:19:11.282 "traddr": "192.168.100.8", 00:19:11.282 "trsvcid": "37723" 00:19:11.282 }, 00:19:11.282 "auth": { 00:19:11.282 "state": "completed", 00:19:11.282 "digest": "sha512", 00:19:11.282 "dhgroup": "ffdhe4096" 00:19:11.282 } 00:19:11.282 } 00:19:11.282 ]' 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.282 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.541 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.541 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.541 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.541 19:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:19:12.476 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.735 19:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.735 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.302 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.302 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.561 { 00:19:13.561 "cntlid": 129, 00:19:13.561 "qid": 0, 00:19:13.561 "state": "enabled", 00:19:13.561 "thread": "nvmf_tgt_poll_group_000", 00:19:13.561 "listen_address": { 00:19:13.561 "trtype": "RDMA", 00:19:13.561 "adrfam": "IPv4", 00:19:13.561 "traddr": "192.168.100.8", 00:19:13.561 "trsvcid": "4420" 00:19:13.561 }, 00:19:13.561 "peer_address": { 00:19:13.561 "trtype": "RDMA", 00:19:13.561 "adrfam": "IPv4", 00:19:13.561 "traddr": "192.168.100.8", 00:19:13.561 "trsvcid": "51123" 00:19:13.561 }, 00:19:13.561 "auth": { 00:19:13.561 "state": "completed", 00:19:13.561 "digest": "sha512", 00:19:13.561 "dhgroup": "ffdhe6144" 00:19:13.561 } 00:19:13.561 } 00:19:13.561 ]' 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.561 19:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.819 19:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:19:14.754 19:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:14.754 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.013 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.271 00:19:15.271 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.271 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.271 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.529 { 00:19:15.529 "cntlid": 131, 00:19:15.529 "qid": 0, 00:19:15.529 "state": "enabled", 00:19:15.529 "thread": "nvmf_tgt_poll_group_000", 00:19:15.529 "listen_address": { 00:19:15.529 "trtype": "RDMA", 00:19:15.529 "adrfam": "IPv4", 00:19:15.529 "traddr": "192.168.100.8", 00:19:15.529 "trsvcid": "4420" 00:19:15.529 }, 00:19:15.529 "peer_address": { 00:19:15.529 "trtype": "RDMA", 00:19:15.529 "adrfam": "IPv4", 00:19:15.529 "traddr": "192.168.100.8", 00:19:15.529 "trsvcid": "50645" 00:19:15.529 }, 00:19:15.529 "auth": { 00:19:15.529 "state": "completed", 00:19:15.529 "digest": "sha512", 00:19:15.529 "dhgroup": "ffdhe6144" 00:19:15.529 } 00:19:15.529 } 00:19:15.529 ]' 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.529 19:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.788 19:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.788 19:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.788 19:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.788 19:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:19:16.726 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.986 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:19:17.553 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.553 19:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.553 { 00:19:17.553 "cntlid": 133, 00:19:17.553 "qid": 0, 00:19:17.553 "state": "enabled", 00:19:17.553 "thread": "nvmf_tgt_poll_group_000", 00:19:17.553 "listen_address": { 00:19:17.553 "trtype": "RDMA", 00:19:17.553 "adrfam": "IPv4", 00:19:17.553 "traddr": "192.168.100.8", 00:19:17.553 "trsvcid": "4420" 00:19:17.553 }, 00:19:17.553 "peer_address": { 00:19:17.553 "trtype": "RDMA", 00:19:17.553 "adrfam": "IPv4", 00:19:17.553 "traddr": "192.168.100.8", 00:19:17.553 "trsvcid": "44228" 00:19:17.553 }, 00:19:17.553 "auth": { 00:19:17.553 "state": "completed", 00:19:17.553 "digest": "sha512", 00:19:17.553 "dhgroup": "ffdhe6144" 00:19:17.553 } 00:19:17.553 } 00:19:17.553 ]' 00:19:17.553 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.812 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.070 19:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
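The kernel-initiator leg of each pass, traced at target/auth.sh@52 and @55, reduces to a plain nvme-cli connect/disconnect pair; the sketch below uses placeholder secrets, since the real DHHC-1 blobs are generated per run. The disconnect's completion message follows in the trace.

    # Connect the kernel host with DH-HMAC-CHAP secrets, then tear down.
    # <host key>/<ctrl key> stand in for the per-run DHHC-1 secrets.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<ctrl key>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0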
00:19:19.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.006 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.265 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.523 00:19:19.523 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.523 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.523 19:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.782 { 00:19:19.782 "cntlid": 135, 00:19:19.782 "qid": 0, 00:19:19.782 "state": "enabled", 00:19:19.782 "thread": "nvmf_tgt_poll_group_000", 00:19:19.782 "listen_address": { 00:19:19.782 "trtype": "RDMA", 00:19:19.782 "adrfam": "IPv4", 00:19:19.782 "traddr": "192.168.100.8", 00:19:19.782 "trsvcid": "4420" 00:19:19.782 }, 00:19:19.782 "peer_address": { 00:19:19.782 "trtype": "RDMA", 00:19:19.782 "adrfam": "IPv4", 00:19:19.782 "traddr": "192.168.100.8", 00:19:19.782 "trsvcid": "40481" 00:19:19.782 }, 00:19:19.782 "auth": { 00:19:19.782 "state": "completed", 00:19:19.782 "digest": "sha512", 00:19:19.782 "dhgroup": "ffdhe6144" 00:19:19.782 } 00:19:19.782 } 00:19:19.782 ]' 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.782 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.041 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.041 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.041 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.041 19:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:19:20.978 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.978 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:20.978 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:20.978 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.236 19:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.804 00:19:21.804 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.804 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.804 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
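The @92/@93 loop markers above give the overall shape of this sweep; below is a reconstructed sketch (not the verbatim script) of what target/auth.sh@92-@96 iterates during this sha512 stretch.

    # For each DH group, re-arm the host-side bdev options, then run a
    # full connect_authenticate cycle for every configured key index.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                    --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done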
00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.063 { 00:19:22.063 "cntlid": 137, 00:19:22.063 "qid": 0, 00:19:22.063 "state": "enabled", 00:19:22.063 "thread": "nvmf_tgt_poll_group_000", 00:19:22.063 "listen_address": { 00:19:22.063 "trtype": "RDMA", 00:19:22.063 "adrfam": "IPv4", 00:19:22.063 "traddr": "192.168.100.8", 00:19:22.063 "trsvcid": "4420" 00:19:22.063 }, 00:19:22.063 "peer_address": { 00:19:22.063 "trtype": "RDMA", 00:19:22.063 "adrfam": "IPv4", 00:19:22.063 "traddr": "192.168.100.8", 00:19:22.063 "trsvcid": "52022" 00:19:22.063 }, 00:19:22.063 "auth": { 00:19:22.063 "state": "completed", 00:19:22.063 "digest": "sha512", 00:19:22.063 "dhgroup": "ffdhe8192" 00:19:22.063 } 00:19:22.063 } 00:19:22.063 ]' 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.063 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.322 19:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.257 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.515 19:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.083 00:19:24.083 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.083 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.083 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.342 { 00:19:24.342 "cntlid": 139, 00:19:24.342 "qid": 0, 00:19:24.342 "state": "enabled", 00:19:24.342 "thread": "nvmf_tgt_poll_group_000", 00:19:24.342 "listen_address": { 00:19:24.342 "trtype": "RDMA", 00:19:24.342 "adrfam": "IPv4", 00:19:24.342 "traddr": "192.168.100.8", 00:19:24.342 "trsvcid": "4420" 00:19:24.342 }, 00:19:24.342 "peer_address": { 00:19:24.342 "trtype": "RDMA", 00:19:24.342 "adrfam": "IPv4", 00:19:24.342 "traddr": "192.168.100.8", 00:19:24.342 "trsvcid": "52625" 00:19:24.342 }, 00:19:24.342 "auth": { 00:19:24.342 "state": "completed", 00:19:24.342 "digest": "sha512", 00:19:24.342 "dhgroup": "ffdhe8192" 00:19:24.342 } 00:19:24.342 } 00:19:24.342 ]' 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.342 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.601 19:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NTM0MDRmNDBhNWNlNmM4MjJjNWRiODY2Mzg1OWM0MWHhFORF: --dhchap-ctrl-secret DHHC-1:02:MjQwZDEzMjkzYTg4M2Y3YzQ1Y2FjZmY3NjE0N2UwOGI0OGYwM2NmYWJjN2Y1ZTYyEjfwJQ==: 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.537 19:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.796 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.364 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.364 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.623 19:10:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.623 { 00:19:26.623 "cntlid": 141, 00:19:26.623 "qid": 0, 00:19:26.623 "state": "enabled", 00:19:26.623 "thread": "nvmf_tgt_poll_group_000", 00:19:26.623 "listen_address": { 00:19:26.623 "trtype": "RDMA", 00:19:26.623 "adrfam": "IPv4", 00:19:26.623 "traddr": "192.168.100.8", 00:19:26.623 "trsvcid": "4420" 00:19:26.623 }, 00:19:26.623 "peer_address": { 00:19:26.623 "trtype": "RDMA", 00:19:26.623 "adrfam": "IPv4", 00:19:26.623 "traddr": "192.168.100.8", 00:19:26.623 "trsvcid": "60495" 00:19:26.623 }, 00:19:26.623 "auth": { 00:19:26.623 "state": "completed", 00:19:26.623 "digest": "sha512", 00:19:26.623 "dhgroup": "ffdhe8192" 00:19:26.623 } 00:19:26.623 } 00:19:26.623 ]' 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.623 19:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.881 19:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NmExNGVkZDYzM2U2MjU0MjJhNGVkY2Y0NDBlNTU1ZWNhNzg0Y2Y4OTMzMzk0ODNhHXNO5Q==: --dhchap-ctrl-secret DHHC-1:01:NzJkNDRkZmY5YjRlYjQwMmZkMDRjNjkxY2Y0ODI1MTfV3eFz: 00:19:27.818 19:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.818 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.076 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.077 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.077 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.077 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.643 00:19:28.643 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.643 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.643 19:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.643 { 00:19:28.643 "cntlid": 143, 00:19:28.643 "qid": 0, 00:19:28.643 "state": "enabled", 00:19:28.643 "thread": "nvmf_tgt_poll_group_000", 00:19:28.643 "listen_address": { 00:19:28.643 "trtype": "RDMA", 00:19:28.643 "adrfam": "IPv4", 00:19:28.643 "traddr": "192.168.100.8", 00:19:28.643 
"trsvcid": "4420" 00:19:28.643 }, 00:19:28.643 "peer_address": { 00:19:28.643 "trtype": "RDMA", 00:19:28.643 "adrfam": "IPv4", 00:19:28.643 "traddr": "192.168.100.8", 00:19:28.643 "trsvcid": "45964" 00:19:28.643 }, 00:19:28.643 "auth": { 00:19:28.643 "state": "completed", 00:19:28.643 "digest": "sha512", 00:19:28.643 "dhgroup": "ffdhe8192" 00:19:28.643 } 00:19:28.643 } 00:19:28.643 ]' 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.643 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.902 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.162 19:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:19:29.729 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.988 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.247 19:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.814 00:19:30.814 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.814 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.814 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.814 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.077 { 00:19:31.077 "cntlid": 145, 00:19:31.077 "qid": 0, 00:19:31.077 "state": "enabled", 00:19:31.077 "thread": "nvmf_tgt_poll_group_000", 00:19:31.077 "listen_address": { 
00:19:31.077 "trtype": "RDMA", 00:19:31.077 "adrfam": "IPv4", 00:19:31.077 "traddr": "192.168.100.8", 00:19:31.077 "trsvcid": "4420" 00:19:31.077 }, 00:19:31.077 "peer_address": { 00:19:31.077 "trtype": "RDMA", 00:19:31.077 "adrfam": "IPv4", 00:19:31.077 "traddr": "192.168.100.8", 00:19:31.077 "trsvcid": "39322" 00:19:31.077 }, 00:19:31.077 "auth": { 00:19:31.077 "state": "completed", 00:19:31.077 "digest": "sha512", 00:19:31.077 "dhgroup": "ffdhe8192" 00:19:31.077 } 00:19:31.077 } 00:19:31.077 ]' 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.077 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.337 19:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OTc4NjA1NzI5NjMyOGM0MjI2NTliZGM4NTk0ZjdiMDEwZmFjNWZiNGEwYzRhODk5Vo/hKA==: --dhchap-ctrl-secret DHHC-1:03:NWFmMDhmYjE4OTA2MmE0ZGYxZGU2NzljNDI4ZjRmOGE2NTljMDdmNDJlYjhmYzMwNzA3OTE3YThiNDJlNjVjM8S0ot4=: 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:32.272 19:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.350 request: 00:20:04.350 { 00:20:04.350 "name": "nvme0", 00:20:04.351 "trtype": "rdma", 00:20:04.351 "traddr": "192.168.100.8", 00:20:04.351 "adrfam": "ipv4", 00:20:04.351 "trsvcid": "4420", 00:20:04.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:20:04.351 "prchk_reftag": false, 00:20:04.351 "prchk_guard": false, 00:20:04.351 "hdgst": false, 00:20:04.351 "ddgst": false, 00:20:04.351 "dhchap_key": "key2", 00:20:04.351 "method": "bdev_nvme_attach_controller", 00:20:04.351 "req_id": 1 00:20:04.351 } 00:20:04.351 Got JSON-RPC error response 00:20:04.351 response: 00:20:04.351 { 00:20:04.351 "code": -5, 00:20:04.351 "message": "Input/output error" 00:20:04.351 } 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.351 request: 00:20:04.351 { 00:20:04.351 "name": "nvme0", 00:20:04.351 "trtype": "rdma", 00:20:04.351 "traddr": "192.168.100.8", 00:20:04.351 "adrfam": "ipv4", 00:20:04.351 "trsvcid": "4420", 00:20:04.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:20:04.351 "prchk_reftag": false, 00:20:04.351 "prchk_guard": false, 00:20:04.351 "hdgst": false, 00:20:04.351 "ddgst": false, 00:20:04.351 "dhchap_key": "key1", 00:20:04.351 "dhchap_ctrlr_key": "ckey2", 00:20:04.351 "method": "bdev_nvme_attach_controller", 00:20:04.351 "req_id": 1 00:20:04.351 } 00:20:04.351 Got JSON-RPC error response 00:20:04.351 response: 00:20:04.351 { 00:20:04.351 "code": -5, 00:20:04.351 
"message": "Input/output error" 00:20:04.351 } 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.351 19:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.471 request: 00:20:36.471 { 00:20:36.471 "name": "nvme0", 00:20:36.471 "trtype": "rdma", 00:20:36.471 "traddr": "192.168.100.8", 00:20:36.471 "adrfam": "ipv4", 00:20:36.471 "trsvcid": "4420", 00:20:36.471 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:20:36.471 "prchk_reftag": false, 00:20:36.471 "prchk_guard": false, 00:20:36.471 "hdgst": false, 00:20:36.471 "ddgst": false, 00:20:36.471 "dhchap_key": "key1", 00:20:36.471 "dhchap_ctrlr_key": "ckey1", 00:20:36.471 "method": "bdev_nvme_attach_controller", 00:20:36.471 "req_id": 1 00:20:36.471 } 00:20:36.471 Got JSON-RPC error response 00:20:36.471 response: 00:20:36.471 { 00:20:36.471 "code": -5, 00:20:36.471 "message": "Input/output error" 00:20:36.471 } 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 770465 ']' 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 770465' 00:20:36.471 killing process with pid 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 770465 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart 
--wait-for-rpc -L nvmf_auth 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=807701 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 807701 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 807701 ']' 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.471 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.472 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.472 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.472 19:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 807701 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 807701 ']' 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
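
Each connect_authenticate pass in this transcript reduces to the same target-side and host-side RPC sequence. A minimal sketch of one successful round-trip follows; $SUBNQN and $HOSTNQN are placeholders for the NQNs printed in the log, while the sockets, transport, digest, dhgroup, and RPC names are the ones exercised verbatim in this run:

    # Target side (default /var/tmp/spdk.sock): register the host with its DH-HMAC-CHAP keys.
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side (/var/tmp/host.sock): restrict the negotiation, then attach over RDMA.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the qpair authenticated, then tear the controller down again.
    scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
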
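The key-mismatch cases (target/auth.sh@118 and the later @125, @132, @158, @169, @188 checks) wrap the attach in the NOT helper and only require that it fail; the JSON-RPC error dumped in the log (code -5, "Input/output error") is the expected outcome, not a test failure. A sketch of the check, assuming a simplified stand-in for autotest_common.sh's NOT helper:

    # The target registered the host with key1 only, so attaching with key2 must
    # fail; rpc.py exits non-zero after printing the JSON-RPC -5 error seen above.
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
           -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
           -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
        echo "attach with the wrong key unexpectedly succeeded" >&2
        exit 1
    fi
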
00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.472 19:11:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.472 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.472 { 00:20:36.472 "cntlid": 1, 00:20:36.472 "qid": 0, 00:20:36.472 "state": "enabled", 00:20:36.472 "thread": "nvmf_tgt_poll_group_000", 00:20:36.472 "listen_address": { 00:20:36.472 "trtype": "RDMA", 00:20:36.472 "adrfam": "IPv4", 00:20:36.472 "traddr": "192.168.100.8", 00:20:36.472 "trsvcid": "4420" 00:20:36.472 }, 00:20:36.472 "peer_address": { 00:20:36.472 "trtype": "RDMA", 00:20:36.472 "adrfam": "IPv4", 00:20:36.472 "traddr": "192.168.100.8", 00:20:36.472 "trsvcid": "58297" 00:20:36.472 }, 00:20:36.472 "auth": { 00:20:36.472 "state": "completed", 00:20:36.472 "digest": "sha512", 00:20:36.472 "dhgroup": "ffdhe8192" 00:20:36.472 } 00:20:36.472 } 00:20:36.472 ]' 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.472 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.731 19:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid 80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTI5ZWUwNDhmZTkwMDdhNDAwMzMwNzc4MTBiM2E2NjFhZjRjZDExNTUyOThhYTJjOTM2ZmZhNDYxZTY3NWQ5NLD7Guo=: 00:20:37.298 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.557 19:11:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --dhchap-key key3 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:37.557 19:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.815 19:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.936 request: 00:21:09.936 { 00:21:09.936 "name": "nvme0", 00:21:09.936 "trtype": "rdma", 00:21:09.936 "traddr": "192.168.100.8", 00:21:09.936 "adrfam": "ipv4", 00:21:09.936 "trsvcid": "4420", 00:21:09.936 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:09.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:21:09.936 "prchk_reftag": false, 00:21:09.936 "prchk_guard": false, 00:21:09.936 "hdgst": false, 00:21:09.936 "ddgst": false, 00:21:09.936 "dhchap_key": "key3", 00:21:09.936 "method": "bdev_nvme_attach_controller", 00:21:09.936 "req_id": 1 00:21:09.936 } 00:21:09.936 Got JSON-RPC error response 00:21:09.936 response: 
00:21:09.936 { 00:21:09.936 "code": -5, 00:21:09.936 "message": "Input/output error" 00:21:09.936 } 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.936 19:12:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.018 request: 00:21:42.018 { 00:21:42.018 "name": "nvme0", 00:21:42.018 "trtype": "rdma", 00:21:42.018 "traddr": "192.168.100.8", 00:21:42.018 "adrfam": "ipv4", 00:21:42.018 "trsvcid": "4420", 00:21:42.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:21:42.018 
"prchk_reftag": false, 00:21:42.018 "prchk_guard": false, 00:21:42.018 "hdgst": false, 00:21:42.018 "ddgst": false, 00:21:42.018 "dhchap_key": "key3", 00:21:42.018 "method": "bdev_nvme_attach_controller", 00:21:42.018 "req_id": 1 00:21:42.018 } 00:21:42.018 Got JSON-RPC error response 00:21:42.018 response: 00:21:42.018 { 00:21:42.018 "code": -5, 00:21:42.018 "message": "Input/output error" 00:21:42.018 } 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:42.018 19:12:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:42.018 request: 00:21:42.018 { 00:21:42.018 "name": "nvme0", 00:21:42.018 "trtype": "rdma", 00:21:42.018 "traddr": "192.168.100.8", 00:21:42.018 "adrfam": "ipv4", 00:21:42.018 "trsvcid": "4420", 00:21:42.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:42.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562", 00:21:42.018 "prchk_reftag": false, 00:21:42.018 "prchk_guard": false, 00:21:42.018 "hdgst": false, 00:21:42.018 "ddgst": false, 00:21:42.018 "dhchap_key": "key0", 00:21:42.018 "dhchap_ctrlr_key": "key1", 00:21:42.018 "method": "bdev_nvme_attach_controller", 00:21:42.018 "req_id": 1 00:21:42.018 } 00:21:42.018 Got JSON-RPC error response 00:21:42.018 response: 00:21:42.018 { 00:21:42.018 "code": -5, 00:21:42.018 "message": "Input/output error" 00:21:42.018 } 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:42.018 00:21:42.018 
19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.018 19:12:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 770710 ']' 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 770710' 00:21:42.018 killing process with pid 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 770710 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.018 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:42.019 rmmod nvme_rdma 00:21:42.019 rmmod nvme_fabrics 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 807701 ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 807701 ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 807701' 00:21:42.019 killing process with pid 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 807701 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.H8T /tmp/spdk.key-sha256.ItA /tmp/spdk.key-sha384.gwM /tmp/spdk.key-sha512.tJ7 /tmp/spdk.key-sha512.c8R /tmp/spdk.key-sha384.W5G /tmp/spdk.key-sha256.ly9 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:42.019 00:21:42.019 real 4m48.694s 00:21:42.019 user 10m25.000s 00:21:42.019 sys 0m20.560s 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.019 ************************************ 00:21:42.019 END TEST nvmf_auth_target 00:21:42.019 ************************************ 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:42.019 ************************************ 00:21:42.019 START TEST nvmf_srq_overwhelm 00:21:42.019 ************************************ 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:42.019 * Looking for test storage... 00:21:42.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.019 19:12:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.019 19:12:33 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.019 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.020 19:12:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.210 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:21:46.211 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:21:46.211 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:21:46.211 Found net devices under 0000:af:00.0: mlx_0_0 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:21:46.211 Found net devices under 0000:af:00.1: mlx_0_1 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 
-- # [[ yes == yes ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.211 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:46.471 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:46.471 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:21:46.471 altname enp175s0f0np0 00:21:46.471 altname ens801f0np0 00:21:46.471 inet 192.168.100.8/24 scope global mlx_0_0 00:21:46.471 valid_lft forever preferred_lft forever 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:46.471 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:46.471 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:21:46.471 altname enp175s0f1np1 00:21:46.471 altname ens801f1np1 00:21:46.471 inet 192.168.100.9/24 scope global mlx_0_1 00:21:46.471 valid_lft forever preferred_lft forever 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:46.471 19:12:38 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:46.471 19:12:38 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:46.471 192.168.100.9' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:46.471 192.168.100.9' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:46.471 192.168.100.9' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=822013 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 822013 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 822013 ']' 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
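The nvmfappstart/waitforlisten pair above reduces to launching the target binary and polling its RPC socket until it answers. A minimal sketch using the binary path and flags from this run; the polling loop is an illustration, not the exact waitforlisten implementation from autotest_common.sh:

    # Launch the SPDK NVMe-oF target (shm id 0, tracepoint mask 0xFFFF, core mask 0xF = cores 0-3).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target starts answering.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done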
00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.471 19:12:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:46.471 [2024-07-25 19:12:38.850245] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:46.471 [2024-07-25 19:12:38.850298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.471 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.471 [2024-07-25 19:12:38.921765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.731 [2024-07-25 19:12:39.002371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.731 [2024-07-25 19:12:39.002408] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.731 [2024-07-25 19:12:39.002416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.731 [2024-07-25 19:12:39.002422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.731 [2024-07-25 19:12:39.002428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.731 [2024-07-25 19:12:39.002487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.731 [2024-07-25 19:12:39.002608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.731 [2024-07-25 19:12:39.002713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.731 [2024-07-25 19:12:39.002713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.298 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.298 [2024-07-25 19:12:39.760597] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20a5df0/0x20aa2e0) succeed. 00:21:47.556 [2024-07-25 19:12:39.770022] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20a7430/0x20eb980) succeed. 
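Everything from here to the fio run repeats one five-step pattern per subsystem, cnode0 through cnode5. Written out as direct rpc.py calls for cnode0, with the values copied from the traces that follow, the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # One RDMA transport for the whole target; -u is the I/O unit size and -s caps
    # the shared receive queue depth (the SRQ this test is built to overwhelm).
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    # Subsystem, a 64 MiB / 512 B-block malloc bdev, its namespace, and an RDMA listener.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # Host side: connect with 15 I/O queues (the -i value common.sh picked for this NIC).
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 \
        --hostid=80bdebd3-4c74-ea11-906e-0017a4403562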
00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.556 Malloc0 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.556 [2024-07-25 19:12:39.864303] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.556 19:12:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:21:50.844 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:21:50.844 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:50.844 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:50.844 19:12:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.844 Malloc1 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.844 19:12:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:54.127 Malloc2 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.127 19:12:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:57.416 19:12:49 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.416 Malloc3 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.416 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:57.417 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.417 19:12:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:22:00.704 
19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.704 Malloc4 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.704 19:12:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
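Each connect above is followed by the same waitforblk check before the next subsystem is created. Stripped of the xtrace noise, it is a bounded lsblk poll; a sketch of the pattern (the retry bound here is an assumption, and the real helper in autotest_common.sh differs in detail):

    waitforblk() {
        local name=$1 i=0
        # Retry until the new namespace shows up as a kernel block device.
        while ! lsblk -l -o NAME | grep -q -w "$name"; do
            (( ++i > 15 )) && return 1   # assumed bound; the helper gives up eventually
            sleep 1
        done
        return 0
    }
    waitforblk nvme4n1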
00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:03.237 Malloc5 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.237 19:12:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:22:06.528 19:12:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:22:06.528 
[global] 00:22:06.528 thread=1 00:22:06.528 invalidate=1 00:22:06.528 rw=read 00:22:06.528 time_based=1 00:22:06.528 runtime=10 00:22:06.528 ioengine=libaio 00:22:06.528 direct=1 00:22:06.528 bs=1048576 00:22:06.528 iodepth=128 00:22:06.528 norandommap=1 00:22:06.528 numjobs=13 00:22:06.528 00:22:06.528 [job0] 00:22:06.528 filename=/dev/nvme0n1 00:22:06.528 [job1] 00:22:06.528 filename=/dev/nvme1n1 00:22:06.528 [job2] 00:22:06.528 filename=/dev/nvme2n1 00:22:06.528 [job3] 00:22:06.528 filename=/dev/nvme3n1 00:22:06.528 [job4] 00:22:06.528 filename=/dev/nvme4n1 00:22:06.528 [job5] 00:22:06.528 filename=/dev/nvme5n1 00:22:06.528 Could not set queue depth (nvme0n1) 00:22:06.528 Could not set queue depth (nvme1n1) 00:22:06.528 Could not set queue depth (nvme2n1) 00:22:06.528 Could not set queue depth (nvme3n1) 00:22:06.528 Could not set queue depth (nvme4n1) 00:22:06.528 Could not set queue depth (nvme5n1) 00:22:06.787 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 00:22:06.787 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 00:22:06.787 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 00:22:06.787 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 00:22:06.787 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 00:22:06.787 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:22:06.787 ... 
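fio-wrapper turned its arguments (-i 1048576 -d 128 -t read -r 10 -n 13) into the job file just printed, and the [global] section maps one-to-one onto plain fio options. An equivalent standalone run against a single device would look like the sketch below; six such jobs at numjobs=13 account for the 78 threads fio reports next:

    # Same workload as [job0] above: 1 MiB sequential reads, QD 128, 13 clones, 10 s.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=1048576 --iodepth=128 --numjobs=13 --thread=1 \
        --ioengine=libaio --direct=1 --invalidate=1 --norandommap=1 \
        --time_based=1 --runtime=10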
00:22:06.787 fio-3.35 00:22:06.787 Starting 78 threads 00:22:21.669 00:22:21.669 job0: (groupid=0, jobs=1): err= 0: pid=825631: Thu Jul 25 19:13:13 2024 00:22:21.669 read: IOPS=12, BW=12.5MiB/s (13.1MB/s)(152MiB/12151msec) 00:22:21.669 slat (usec): min=99, max=4266.1k, avg=65863.85, stdev=425373.46 00:22:21.669 clat (msec): min=638, max=12074, avg=9874.07, stdev=3992.13 00:22:21.669 lat (msec): min=639, max=12076, avg=9939.93, stdev=3944.76 00:22:21.669 clat percentiles (msec): 00:22:21.669 | 1.00th=[ 642], 5.00th=[ 651], 10.00th=[ 667], 20.00th=[11476], 00:22:21.669 | 30.00th=[11476], 40.00th=[11610], 50.00th=[11610], 60.00th=[11745], 00:22:21.669 | 70.00th=[11745], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:22:21.669 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:21.669 | 99.99th=[12013] 00:22:21.669 bw ( KiB/s): min= 1799, max=43008, per=0.37%, avg=10190.20, stdev=18346.02, samples=5 00:22:21.669 iops : min= 1, max= 42, avg= 9.80, stdev=18.01, samples=5 00:22:21.669 lat (msec) : 750=13.82%, 2000=0.66%, >=2000=85.53% 00:22:21.669 cpu : usr=0.00%, sys=0.74%, ctx=239, majf=0, minf=32769 00:22:21.669 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.5%, 32=21.1%, >=64=58.6% 00:22:21.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.669 complete : 0=0.0%, 4=96.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.8% 00:22:21.669 issued rwts: total=152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.669 job0: (groupid=0, jobs=1): err= 0: pid=825632: Thu Jul 25 19:13:13 2024 00:22:21.669 read: IOPS=2, BW=2878KiB/s (2948kB/s)(40.0MiB/14230msec) 00:22:21.669 slat (usec): min=761, max=8583.5k, avg=251076.91, stdev=1365678.10 00:22:21.669 clat (msec): min=4185, max=14225, avg=13879.02, stdev=1588.48 00:22:21.669 lat (msec): min=12769, max=14229, avg=14130.09, stdev=229.33 00:22:21.669 clat percentiles (msec): 00:22:21.669 | 1.00th=[ 4178], 5.00th=[12818], 10.00th=[14026], 20.00th=[14026], 00:22:21.669 | 30.00th=[14160], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:22:21.669 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.669 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.669 | 99.99th=[14160] 00:22:21.669 lat (msec) : >=2000=100.00% 00:22:21.669 cpu : usr=0.00%, sys=0.24%, ctx=51, majf=0, minf=10241 00:22:21.669 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:22:21.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.669 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.669 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.669 job0: (groupid=0, jobs=1): err= 0: pid=825633: Thu Jul 25 19:13:13 2024 00:22:21.669 read: IOPS=1, BW=1187KiB/s (1215kB/s)(14.0MiB/12079msec) 00:22:21.669 slat (usec): min=809, max=8570.7k, avg=862705.31, stdev=2305607.99 00:22:21.669 clat (usec): min=939, max=12068k, avg=10235613.02, stdev=3263897.36 00:22:21.669 lat (msec): min=8571, max=12078, avg=11098.32, stdev=1433.63 00:22:21.669 clat percentiles (usec): 00:22:21.669 | 1.00th=[ 938], 5.00th=[ 938], 10.00th=[ 8556381], 00:22:21.669 | 20.00th=[ 8657044], 30.00th=[10670310], 40.00th=[10670310], 00:22:21.669 | 50.00th=[12012487], 60.00th=[12012487], 70.00th=[12012487], 00:22:21.669 | 80.00th=[12012487], 90.00th=[12012487], 
95.00th=[12012487], 00:22:21.669 | 99.00th=[12012487], 99.50th=[12012487], 99.90th=[12012487], 00:22:21.669 | 99.95th=[12012487], 99.99th=[12012487] 00:22:21.669 lat (usec) : 1000=7.14% 00:22:21.669 lat (msec) : >=2000=92.86% 00:22:21.669 cpu : usr=0.00%, sys=0.09%, ctx=24, majf=0, minf=3585 00:22:21.669 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.669 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.669 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.669 job0: (groupid=0, jobs=1): err= 0: pid=825634: Thu Jul 25 19:13:13 2024 00:22:21.669 read: IOPS=0, BW=290KiB/s (297kB/s)(4096KiB/14130msec) 00:22:21.669 slat (msec): min=21, max=11943, avg=3020.60, stdev=5948.91 00:22:21.670 clat (msec): min=2047, max=14059, avg=11034.04, stdev=5991.08 00:22:21.670 lat (msec): min=13991, max=14129, avg=14054.63, stdev=57.65 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 2056], 5.00th=[ 2056], 10.00th=[ 2056], 20.00th=[ 2056], 00:22:21.670 | 30.00th=[14026], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026], 00:22:21.670 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:21.670 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:21.670 | 99.99th=[14026] 00:22:21.670 lat (msec) : >=2000=100.00% 00:22:21.670 cpu : usr=0.00%, sys=0.02%, ctx=23, majf=0, minf=1025 00:22:21.670 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825635: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=36, BW=36.2MiB/s (38.0MB/s)(437MiB/12063msec) 00:22:21.670 slat (usec): min=42, max=2192.2k, avg=27470.64, stdev=225410.33 00:22:21.670 clat (msec): min=56, max=11028, avg=3391.37, stdev=4591.55 00:22:21.670 lat (msec): min=261, max=11030, avg=3418.84, stdev=4601.84 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 262], 5.00th=[ 264], 10.00th=[ 264], 20.00th=[ 266], 00:22:21.670 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 368], 60.00th=[ 472], 00:22:21.670 | 70.00th=[ 4279], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:22:21.670 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:22:21.670 | 99.99th=[11073] 00:22:21.670 bw ( KiB/s): min= 2048, max=346112, per=3.24%, avg=90381.29, stdev=142907.30, samples=7 00:22:21.670 iops : min= 2, max= 338, avg=88.14, stdev=139.63, samples=7 00:22:21.670 lat (msec) : 100=0.23%, 500=63.16%, 750=2.75%, >=2000=33.87% 00:22:21.670 cpu : usr=0.02%, sys=0.90%, ctx=448, majf=0, minf=32769 00:22:21.670 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.3%, >=64=85.6% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:21.670 issued rwts: total=437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825636: Thu Jul 25 19:13:13 2024 
00:22:21.670 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(291MiB/14164msec) 00:22:21.670 slat (usec): min=42, max=10796k, avg=41630.68, stdev=632638.88 00:22:21.670 clat (msec): min=426, max=13300, avg=5992.39, stdev=6242.41 00:22:21.670 lat (msec): min=429, max=13304, avg=6034.03, stdev=6250.18 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 426], 5.00th=[ 439], 10.00th=[ 447], 20.00th=[ 472], 00:22:21.670 | 30.00th=[ 506], 40.00th=[ 542], 50.00th=[ 550], 60.00th=[12953], 00:22:21.670 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 95.00th=[13221], 00:22:21.670 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:22:21.670 | 99.99th=[13355] 00:22:21.670 bw ( KiB/s): min= 2052, max=178176, per=4.01%, avg=111958.67, stdev=95846.15, samples=3 00:22:21.670 iops : min= 2, max= 174, avg=109.33, stdev=93.60, samples=3 00:22:21.670 lat (msec) : 500=29.90%, 750=26.12%, >=2000=43.99% 00:22:21.670 cpu : usr=0.01%, sys=0.76%, ctx=452, majf=0, minf=32769 00:22:21.670 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:21.670 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825637: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=14, BW=14.8MiB/s (15.5MB/s)(209MiB/14125msec) 00:22:21.670 slat (usec): min=223, max=10796k, avg=57778.44, stdev=746394.49 00:22:21.670 clat (msec): min=642, max=13534, avg=8303.73, stdev=6130.74 00:22:21.670 lat (msec): min=657, max=13538, avg=8361.51, stdev=6122.41 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 659], 5.00th=[ 659], 10.00th=[ 676], 20.00th=[ 676], 00:22:21.670 | 30.00th=[ 684], 40.00th=[12818], 50.00th=[12953], 60.00th=[13087], 00:22:21.670 | 70.00th=[13221], 80.00th=[13355], 90.00th=[13489], 95.00th=[13489], 00:22:21.670 | 99.00th=[13489], 99.50th=[13489], 99.90th=[13489], 99.95th=[13489], 00:22:21.670 | 99.99th=[13489] 00:22:21.670 bw ( KiB/s): min= 2052, max=83968, per=2.01%, avg=55980.00, stdev=46714.24, samples=3 00:22:21.670 iops : min= 2, max= 82, avg=54.67, stdev=45.62, samples=3 00:22:21.670 lat (msec) : 750=38.28%, 1000=0.48%, >=2000=61.24% 00:22:21.670 cpu : usr=0.00%, sys=0.66%, ctx=452, majf=0, minf=32769 00:22:21.670 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.7%, 32=15.3%, >=64=69.9% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:22:21.670 issued rwts: total=209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825638: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=1, BW=1803KiB/s (1846kB/s)(25.0MiB/14200msec) 00:22:21.670 slat (usec): min=877, max=6433.9k, avg=400532.38, stdev=1348145.20 00:22:21.670 clat (msec): min=4186, max=14198, avg=13559.42, stdev=2091.20 00:22:21.670 lat (msec): min=10619, max=14199, avg=13959.95, stdev=749.81 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 4178], 5.00th=[10671], 10.00th=[12818], 20.00th=[14026], 00:22:21.670 | 30.00th=[14160], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:22:21.670 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.670 | 
99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.670 | 99.99th=[14160] 00:22:21.670 lat (msec) : >=2000=100.00% 00:22:21.670 cpu : usr=0.00%, sys=0.19%, ctx=37, majf=0, minf=6401 00:22:21.670 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:21.670 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825639: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=1, BW=1515KiB/s (1552kB/s)(21.0MiB/14191msec) 00:22:21.670 slat (usec): min=759, max=11938k, avg=578224.52, stdev=2602946.10 00:22:21.670 clat (msec): min=2047, max=14188, avg=13561.41, stdev=2639.11 00:22:21.670 lat (msec): min=13985, max=14190, avg=14139.64, stdev=71.54 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 2056], 5.00th=[14026], 10.00th=[14026], 20.00th=[14026], 00:22:21.670 | 30.00th=[14160], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:22:21.670 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.670 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.670 | 99.99th=[14160] 00:22:21.670 lat (msec) : >=2000=100.00% 00:22:21.670 cpu : usr=0.00%, sys=0.14%, ctx=34, majf=0, minf=5377 00:22:21.670 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:21.670 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825640: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=4, BW=4143KiB/s (4243kB/s)(49.0MiB/12110msec) 00:22:21.670 slat (usec): min=699, max=2134.2k, avg=245853.17, stdev=655521.35 00:22:21.670 clat (msec): min=62, max=12108, avg=7918.74, stdev=4171.99 00:22:21.670 lat (msec): min=2123, max=12109, avg=8164.60, stdev=4052.65 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 63], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:22:21.670 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 8658], 60.00th=[10671], 00:22:21.670 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:21.670 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.670 | 99.99th=[12147] 00:22:21.670 lat (msec) : 100=2.04%, >=2000=97.96% 00:22:21.670 cpu : usr=0.00%, sys=0.33%, ctx=45, majf=0, minf=12545 00:22:21.670 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.670 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825641: Thu Jul 25 19:13:13 2024 00:22:21.670 read: IOPS=61, BW=61.9MiB/s (64.9MB/s)(748MiB/12079msec) 00:22:21.670 slat (usec): min=42, max=2109.1k, avg=16065.90, stdev=109957.11 00:22:21.670 clat (msec): min=55, max=5630, avg=1929.14, stdev=1476.83 00:22:21.670 lat (msec): min=838, max=5633, 
avg=1945.20, stdev=1477.80 00:22:21.670 clat percentiles (msec): 00:22:21.670 | 1.00th=[ 844], 5.00th=[ 852], 10.00th=[ 869], 20.00th=[ 1070], 00:22:21.670 | 30.00th=[ 1217], 40.00th=[ 1250], 50.00th=[ 1284], 60.00th=[ 1452], 00:22:21.670 | 70.00th=[ 1620], 80.00th=[ 1770], 90.00th=[ 5134], 95.00th=[ 5336], 00:22:21.670 | 99.00th=[ 5537], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:22:21.670 | 99.99th=[ 5604] 00:22:21.670 bw ( KiB/s): min= 8192, max=155648, per=3.03%, avg=84614.60, stdev=42624.47, samples=15 00:22:21.670 iops : min= 8, max= 152, avg=82.47, stdev=41.66, samples=15 00:22:21.670 lat (msec) : 100=0.13%, 1000=16.71%, 2000=65.37%, >=2000=17.78% 00:22:21.670 cpu : usr=0.06%, sys=1.27%, ctx=1465, majf=0, minf=32769 00:22:21.670 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:22:21.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.670 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.670 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.670 job0: (groupid=0, jobs=1): err= 0: pid=825642: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=5, BW=5749KiB/s (5886kB/s)(68.0MiB/12113msec) 00:22:21.671 slat (usec): min=403, max=4228.9k, avg=177326.84, stdev=672243.62 00:22:21.671 clat (msec): min=54, max=12112, avg=8804.51, stdev=2736.78 00:22:21.671 lat (msec): min=4283, max=12112, avg=8981.84, stdev=2545.33 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 55], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:22:21.671 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[ 8658], 60.00th=[ 8658], 00:22:21.671 | 70.00th=[10805], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:21.671 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.671 | 99.99th=[12147] 00:22:21.671 lat (msec) : 100=1.47%, >=2000=98.53% 00:22:21.671 cpu : usr=0.00%, sys=0.39%, ctx=37, majf=0, minf=17409 00:22:21.671 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.671 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job0: (groupid=0, jobs=1): err= 0: pid=825643: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=103, BW=103MiB/s (108MB/s)(1246MiB/12086msec) 00:22:21.671 slat (usec): min=35, max=2111.5k, avg=8065.67, stdev=60833.23 00:22:21.671 clat (msec): min=530, max=6364, avg=1173.11, stdev=803.25 00:22:21.671 lat (msec): min=538, max=6427, avg=1181.18, stdev=812.09 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 542], 5.00th=[ 558], 10.00th=[ 575], 20.00th=[ 609], 00:22:21.671 | 30.00th=[ 676], 40.00th=[ 760], 50.00th=[ 818], 60.00th=[ 894], 00:22:21.671 | 70.00th=[ 1036], 80.00th=[ 2123], 90.00th=[ 2802], 95.00th=[ 2869], 00:22:21.671 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 4245], 99.95th=[ 6342], 00:22:21.671 | 99.99th=[ 6342] 00:22:21.671 bw ( KiB/s): min= 1935, max=225280, per=5.13%, avg=143198.31, stdev=71413.10, samples=16 00:22:21.671 iops : min= 1, max= 220, avg=139.75, stdev=69.82, samples=16 00:22:21.671 lat (msec) : 750=39.00%, 1000=28.49%, 2000=12.20%, >=2000=20.30% 00:22:21.671 cpu : usr=0.04%, sys=1.73%, ctx=1552, majf=0, minf=32769 00:22:21.671 IO depths 
: 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.671 issued rwts: total=1246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825644: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=4, BW=4137KiB/s (4236kB/s)(49.0MiB/12129msec) 00:22:21.671 slat (usec): min=794, max=2138.9k, avg=204304.57, stdev=600056.42 00:22:21.671 clat (msec): min=2117, max=12127, avg=10127.29, stdev=3415.54 00:22:21.671 lat (msec): min=2130, max=12128, avg=10331.59, stdev=3220.26 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 4245], 20.00th=[ 6409], 00:22:21.671 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:22:21.671 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:21.671 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.671 | 99.99th=[12147] 00:22:21.671 lat (msec) : >=2000=100.00% 00:22:21.671 cpu : usr=0.01%, sys=0.35%, ctx=81, majf=0, minf=12545 00:22:21.671 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.671 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825645: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=2, BW=2632KiB/s (2696kB/s)(31.0MiB/12059msec) 00:22:21.671 slat (usec): min=751, max=2120.7k, avg=386588.41, stdev=790244.81 00:22:21.671 clat (msec): min=74, max=12054, avg=6658.62, stdev=4066.66 00:22:21.671 lat (msec): min=2119, max=12058, avg=7045.21, stdev=3988.75 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 74], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:22:21.671 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 8557], 00:22:21.671 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:21.671 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:21.671 | 99.99th=[12013] 00:22:21.671 lat (msec) : 100=3.23%, >=2000=96.77% 00:22:21.671 cpu : usr=0.01%, sys=0.22%, ctx=54, majf=0, minf=7937 00:22:21.671 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:21.671 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825646: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=84, BW=84.3MiB/s (88.4MB/s)(1198MiB/14210msec) 00:22:21.671 slat (usec): min=38, max=2243.9k, avg=8345.43, stdev=108093.84 00:22:21.671 clat (msec): min=257, max=11045, avg=1469.06, stdev=3182.30 00:22:21.671 lat (msec): min=259, max=11047, avg=1477.41, stdev=3192.46 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 264], 5.00th=[ 271], 10.00th=[ 271], 20.00th=[ 305], 00:22:21.671 | 30.00th=[ 326], 40.00th=[ 342], 50.00th=[ 351], 60.00th=[ 368], 
00:22:21.671 | 70.00th=[ 380], 80.00th=[ 422], 90.00th=[ 8490], 95.00th=[10805], 00:22:21.671 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:22:21.671 | 99.99th=[11073] 00:22:21.671 bw ( KiB/s): min= 2048, max=403456, per=8.73%, avg=243712.00, stdev=183969.80, samples=9 00:22:21.671 iops : min= 2, max= 394, avg=238.00, stdev=179.66, samples=9 00:22:21.671 lat (msec) : 500=83.64%, 750=4.92%, 1000=0.08%, >=2000=11.35% 00:22:21.671 cpu : usr=0.06%, sys=1.37%, ctx=1043, majf=0, minf=32769 00:22:21.671 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.671 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825647: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=6, BW=6515KiB/s (6672kB/s)(77.0MiB/12102msec) 00:22:21.671 slat (usec): min=647, max=2148.5k, avg=129869.87, stdev=484210.47 00:22:21.671 clat (msec): min=2101, max=12099, avg=9818.02, stdev=3497.71 00:22:21.671 lat (msec): min=2116, max=12101, avg=9947.89, stdev=3391.46 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 2106], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409], 00:22:21.671 | 30.00th=[10671], 40.00th=[11879], 50.00th=[12013], 60.00th=[12013], 00:22:21.671 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:22:21.671 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.671 | 99.99th=[12147] 00:22:21.671 lat (msec) : >=2000=100.00% 00:22:21.671 cpu : usr=0.00%, sys=0.53%, ctx=98, majf=0, minf=19713 00:22:21.671 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.671 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825648: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=20, BW=20.6MiB/s (21.6MB/s)(249MiB/12097msec) 00:22:21.671 slat (usec): min=84, max=2213.2k, avg=48200.75, stdev=296851.17 00:22:21.671 clat (msec): min=92, max=11364, avg=5949.83, stdev=4995.73 00:22:21.671 lat (msec): min=634, max=11366, avg=5998.03, stdev=4990.85 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 634], 5.00th=[ 642], 10.00th=[ 651], 20.00th=[ 651], 00:22:21.671 | 30.00th=[ 659], 40.00th=[ 701], 50.00th=[ 7080], 60.00th=[10805], 00:22:21.671 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:22:21.671 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:22:21.671 | 99.99th=[11342] 00:22:21.671 bw ( KiB/s): min= 2048, max=122880, per=1.12%, avg=31232.50, stdev=48461.09, samples=8 00:22:21.671 iops : min= 2, max= 120, avg=30.50, stdev=47.33, samples=8 00:22:21.671 lat (msec) : 100=0.40%, 750=42.57%, >=2000=57.03% 00:22:21.671 cpu : usr=0.02%, sys=0.77%, ctx=512, majf=0, minf=32769 00:22:21.671 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 
00:22:21.671 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825649: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=3, BW=3748KiB/s (3838kB/s)(52.0MiB/14208msec) 00:22:21.671 slat (usec): min=768, max=2135.9k, avg=192579.05, stdev=584263.72 00:22:21.671 clat (msec): min=4192, max=14206, avg=13079.96, stdev=2242.57 00:22:21.671 lat (msec): min=6323, max=14206, avg=13272.54, stdev=1862.13 00:22:21.671 clat percentiles (msec): 00:22:21.671 | 1.00th=[ 4178], 5.00th=[ 8490], 10.00th=[10671], 20.00th=[12818], 00:22:21.671 | 30.00th=[14026], 40.00th=[14160], 50.00th=[14160], 60.00th=[14160], 00:22:21.671 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.671 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.671 | 99.99th=[14160] 00:22:21.671 lat (msec) : >=2000=100.00% 00:22:21.671 cpu : usr=0.00%, sys=0.29%, ctx=72, majf=0, minf=13313 00:22:21.671 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:22:21.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.671 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.671 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.671 job1: (groupid=0, jobs=1): err= 0: pid=825650: Thu Jul 25 19:13:13 2024 00:22:21.671 read: IOPS=9, BW=9528KiB/s (9757kB/s)(132MiB/14186msec) 00:22:21.672 slat (usec): min=538, max=8566.5k, avg=91923.84, stdev=767198.06 00:22:21.672 clat (msec): min=2050, max=14173, avg=13130.30, stdev=2106.86 00:22:21.672 lat (msec): min=3542, max=14175, avg=13222.23, stdev=1870.99 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 3540], 5.00th=[10671], 10.00th=[12818], 20.00th=[13489], 00:22:21.672 | 30.00th=[13489], 40.00th=[13624], 50.00th=[13624], 60.00th=[13758], 00:22:21.672 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[14026], 00:22:21.672 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.672 | 99.99th=[14160] 00:22:21.672 bw ( KiB/s): min= 2048, max= 8192, per=0.18%, avg=5120.00, stdev=4344.46, samples=2 00:22:21.672 iops : min= 2, max= 8, avg= 5.00, stdev= 4.24, samples=2 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.03%, sys=0.63%, ctx=220, majf=0, minf=32769 00:22:21.672 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.1%, 16=12.1%, 32=24.2%, >=64=52.3% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=83.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=16.7% 00:22:21.672 issued rwts: total=132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825651: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=23, BW=23.1MiB/s (24.2MB/s)(328MiB/14197msec) 00:22:21.672 slat (usec): min=48, max=2188.3k, avg=30491.15, stdev=236248.23 00:22:21.672 clat (msec): min=434, max=13296, avg=5355.82, stdev=6051.15 00:22:21.672 lat (msec): min=437, max=13300, avg=5386.31, stdev=6064.54 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 439], 5.00th=[ 443], 10.00th=[ 447], 20.00th=[ 451], 00:22:21.672 | 30.00th=[ 460], 40.00th=[ 472], 50.00th=[ 489], 60.00th=[ 2702], 00:22:21.672 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13221], 
95.00th=[13221], 00:22:21.672 | 99.00th=[13355], 99.50th=[13355], 99.90th=[13355], 99.95th=[13355], 00:22:21.672 | 99.99th=[13355] 00:22:21.672 bw ( KiB/s): min= 2048, max=217088, per=2.46%, avg=68608.00, stdev=102127.25, samples=6 00:22:21.672 iops : min= 2, max= 212, avg=67.00, stdev=99.73, samples=6 00:22:21.672 lat (msec) : 500=52.13%, 750=7.32%, >=2000=40.55% 00:22:21.672 cpu : usr=0.01%, sys=0.76%, ctx=508, majf=0, minf=32769 00:22:21.672 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.8% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:21.672 issued rwts: total=328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825652: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=2, BW=2685KiB/s (2749kB/s)(37.0MiB/14111msec) 00:22:21.672 slat (usec): min=574, max=2148.4k, avg=325579.94, stdev=743966.27 00:22:21.672 clat (msec): min=2063, max=14108, avg=11764.20, stdev=3463.88 00:22:21.672 lat (msec): min=4174, max=14110, avg=12089.78, stdev=3070.60 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[10537], 00:22:21.672 | 30.00th=[10671], 40.00th=[12818], 50.00th=[14026], 60.00th=[14026], 00:22:21.672 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.672 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.672 | 99.99th=[14160] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.01%, sys=0.18%, ctx=51, majf=0, minf=9473 00:22:21.672 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.672 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825653: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=3, BW=3550KiB/s (3635kB/s)(49.0MiB/14135msec) 00:22:21.672 slat (usec): min=456, max=2127.2k, avg=246270.07, stdev=657713.78 00:22:21.672 clat (msec): min=2066, max=14133, avg=10436.23, stdev=3874.33 00:22:21.672 lat (msec): min=4164, max=14134, avg=10682.50, stdev=3711.36 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6342], 00:22:21.672 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[12684], 60.00th=[12818], 00:22:21.672 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.672 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.672 | 99.99th=[14160] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.00%, sys=0.27%, ctx=41, majf=0, minf=12545 00:22:21.672 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.672 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825654: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=2, BW=2541KiB/s 
(2602kB/s)(35.0MiB/14106msec) 00:22:21.672 slat (usec): min=724, max=2162.4k, avg=344549.41, stdev=770774.96 00:22:21.672 clat (msec): min=2045, max=14104, avg=12059.98, stdev=3831.94 00:22:21.672 lat (msec): min=4170, max=14105, avg=12404.53, stdev=3425.66 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 2039], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 8490], 00:22:21.672 | 30.00th=[14026], 40.00th=[14026], 50.00th=[14026], 60.00th=[14026], 00:22:21.672 | 70.00th=[14026], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.672 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.672 | 99.99th=[14160] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.01%, sys=0.20%, ctx=32, majf=0, minf=8961 00:22:21.672 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.672 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825655: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=5, BW=5412KiB/s (5542kB/s)(75.0MiB/14191msec) 00:22:21.672 slat (usec): min=720, max=4273.7k, avg=133440.96, stdev=605319.86 00:22:21.672 clat (msec): min=4181, max=14187, avg=12995.79, stdev=2159.10 00:22:21.672 lat (msec): min=4191, max=14190, avg=13129.23, stdev=1900.82 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 4178], 5.00th=[ 8490], 10.00th=[10671], 20.00th=[12818], 00:22:21.672 | 30.00th=[12818], 40.00th=[14026], 50.00th=[14026], 60.00th=[14160], 00:22:21.672 | 70.00th=[14160], 80.00th=[14160], 90.00th=[14160], 95.00th=[14160], 00:22:21.672 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.672 | 99.99th=[14160] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.00%, sys=0.44%, ctx=81, majf=0, minf=19201 00:22:21.672 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.672 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job1: (groupid=0, jobs=1): err= 0: pid=825656: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=1, BW=1308KiB/s (1340kB/s)(18.0MiB/14089msec) 00:22:21.672 slat (usec): min=479, max=4278.4k, avg=668768.93, stdev=1220681.33 00:22:21.672 clat (msec): min=2050, max=14070, avg=11783.52, stdev=3429.67 00:22:21.672 lat (msec): min=6329, max=14088, avg=12452.29, stdev=2455.52 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 2056], 5.00th=[ 2056], 10.00th=[ 6342], 20.00th=[ 8490], 00:22:21.672 | 30.00th=[10671], 40.00th=[12818], 50.00th=[13892], 60.00th=[13892], 00:22:21.672 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:22:21.672 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:22:21.672 | 99.99th=[14026] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.01%, sys=0.08%, ctx=35, majf=0, minf=4609 00:22:21.672 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 
0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:22:21.672 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job2: (groupid=0, jobs=1): err= 0: pid=825657: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=5, BW=6071KiB/s (6216kB/s)(72.0MiB/12145msec) 00:22:21.672 slat (usec): min=809, max=2143.0k, avg=139409.19, stdev=497464.53 00:22:21.672 clat (msec): min=2106, max=12143, avg=10126.33, stdev=2836.27 00:22:21.672 lat (msec): min=4249, max=12144, avg=10265.74, stdev=2678.84 00:22:21.672 clat percentiles (msec): 00:22:21.672 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 6409], 00:22:21.672 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:22:21.672 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:22:21.672 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.672 | 99.99th=[12147] 00:22:21.672 lat (msec) : >=2000=100.00% 00:22:21.672 cpu : usr=0.00%, sys=0.50%, ctx=102, majf=0, minf=18433 00:22:21.672 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:22:21.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.672 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.672 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.672 job2: (groupid=0, jobs=1): err= 0: pid=825658: Thu Jul 25 19:13:13 2024 00:22:21.672 read: IOPS=11, BW=11.3MiB/s (11.9MB/s)(137MiB/12103msec) 00:22:21.672 slat (usec): min=62, max=2188.0k, avg=87980.84, stdev=397607.55 00:22:21.672 clat (msec): min=48, max=12085, avg=10480.51, stdev=2370.59 00:22:21.672 lat (msec): min=2112, max=12087, avg=10568.49, stdev=2197.77 00:22:21.672 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[10805], 00:22:21.673 | 30.00th=[10939], 40.00th=[11208], 50.00th=[11342], 60.00th=[11476], 00:22:21.673 | 70.00th=[11745], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:22:21.673 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.673 | 99.99th=[12147] 00:22:21.673 bw ( KiB/s): min= 2048, max=14336, per=0.22%, avg=6144.00, stdev=7094.48, samples=3 00:22:21.673 iops : min= 2, max= 14, avg= 6.00, stdev= 6.93, samples=3 00:22:21.673 lat (msec) : 50=0.73%, >=2000=99.27% 00:22:21.673 cpu : usr=0.01%, sys=0.83%, ctx=302, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.8%, 16=11.7%, 32=23.4%, >=64=54.0% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=90.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=9.1% 00:22:21.673 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825659: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=87, BW=87.2MiB/s (91.4MB/s)(882MiB/10118msec) 00:22:21.673 slat (usec): min=42, max=2113.6k, avg=11356.08, stdev=99104.82 00:22:21.673 clat (msec): min=96, max=6234, avg=1397.55, stdev=1495.34 00:22:21.673 lat (msec): min=150, max=6241, avg=1408.90, stdev=1503.47 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 176], 5.00th=[ 418], 10.00th=[ 625], 20.00th=[ 743], 00:22:21.673 | 30.00th=[ 793], 40.00th=[ 827], 50.00th=[ 835], 60.00th=[ 844], 00:22:21.673 | 
70.00th=[ 852], 80.00th=[ 953], 90.00th=[ 4933], 95.00th=[ 4933], 00:22:21.673 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:22:21.673 | 99.99th=[ 6208] 00:22:21.673 bw ( KiB/s): min=14336, max=172032, per=4.61%, avg=128656.42, stdev=54351.26, samples=12 00:22:21.673 iops : min= 14, max= 168, avg=125.58, stdev=53.04, samples=12 00:22:21.673 lat (msec) : 100=0.11%, 250=1.81%, 500=5.10%, 750=13.83%, 1000=61.22% 00:22:21.673 lat (msec) : 2000=2.15%, >=2000=15.76% 00:22:21.673 cpu : usr=0.04%, sys=1.60%, ctx=784, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.673 issued rwts: total=882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825660: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(178MiB/14150msec) 00:22:21.673 slat (usec): min=53, max=2178.6k, avg=67901.53, stdev=352752.10 00:22:21.673 clat (msec): min=801, max=13507, avg=9652.62, stdev=5244.09 00:22:21.673 lat (msec): min=809, max=13513, avg=9720.52, stdev=5215.38 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 810], 5.00th=[ 844], 10.00th=[ 927], 20.00th=[ 1099], 00:22:21.673 | 30.00th=[ 8490], 40.00th=[12953], 50.00th=[13087], 60.00th=[13087], 00:22:21.673 | 70.00th=[13221], 80.00th=[13355], 90.00th=[13355], 95.00th=[13489], 00:22:21.673 | 99.00th=[13489], 99.50th=[13489], 99.90th=[13489], 99.95th=[13489], 00:22:21.673 | 99.99th=[13489] 00:22:21.673 bw ( KiB/s): min= 2048, max=83968, per=0.53%, avg=14921.14, stdev=30530.87, samples=7 00:22:21.673 iops : min= 2, max= 82, avg=14.57, stdev=29.82, samples=7 00:22:21.673 lat (msec) : 1000=11.80%, 2000=11.24%, >=2000=76.97% 00:22:21.673 cpu : usr=0.01%, sys=0.64%, ctx=367, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=9.0%, 32=18.0%, >=64=64.6% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:22:21.673 issued rwts: total=178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825661: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=12, BW=12.2MiB/s (12.8MB/s)(173MiB/14126msec) 00:22:21.673 slat (usec): min=80, max=2142.1k, avg=69680.23, stdev=349343.94 00:22:21.673 clat (msec): min=1124, max=13562, avg=9869.39, stdev=4968.68 00:22:21.673 lat (msec): min=1136, max=13584, avg=9939.07, stdev=4935.62 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 1133], 5.00th=[ 1183], 10.00th=[ 1284], 20.00th=[ 1452], 00:22:21.673 | 30.00th=[ 9463], 40.00th=[12550], 50.00th=[12684], 60.00th=[12953], 00:22:21.673 | 70.00th=[13087], 80.00th=[13221], 90.00th=[13355], 95.00th=[13489], 00:22:21.673 | 99.00th=[13489], 99.50th=[13624], 99.90th=[13624], 99.95th=[13624], 00:22:21.673 | 99.99th=[13624] 00:22:21.673 bw ( KiB/s): min= 2035, max=43008, per=0.48%, avg=13454.57, stdev=17692.91, samples=7 00:22:21.673 iops : min= 1, max= 42, avg=12.86, stdev=17.44, samples=7 00:22:21.673 lat (msec) : 2000=21.97%, >=2000=78.03% 00:22:21.673 cpu : usr=0.00%, sys=0.79%, ctx=336, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.6%, 2=1.2%, 
4=2.3%, 8=4.6%, 16=9.2%, 32=18.5%, >=64=63.6% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:22:21.673 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825662: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=12, BW=12.8MiB/s (13.4MB/s)(180MiB/14086msec) 00:22:21.673 slat (usec): min=72, max=2170.8k, avg=66767.68, stdev=349413.82 00:22:21.673 clat (msec): min=814, max=13559, avg=9524.85, stdev=5226.58 00:22:21.673 lat (msec): min=819, max=13563, avg=9591.62, stdev=5200.76 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 818], 5.00th=[ 827], 10.00th=[ 844], 20.00th=[ 969], 00:22:21.673 | 30.00th=[ 7282], 40.00th=[12818], 50.00th=[12953], 60.00th=[13087], 00:22:21.673 | 70.00th=[13221], 80.00th=[13355], 90.00th=[13489], 95.00th=[13489], 00:22:21.673 | 99.00th=[13624], 99.50th=[13624], 99.90th=[13624], 99.95th=[13624], 00:22:21.673 | 99.99th=[13624] 00:22:21.673 bw ( KiB/s): min= 2052, max=79872, per=0.65%, avg=18091.33, stdev=30415.42, samples=6 00:22:21.673 iops : min= 2, max= 78, avg=17.67, stdev=29.70, samples=6 00:22:21.673 lat (msec) : 1000=21.67%, >=2000=78.33% 00:22:21.673 cpu : usr=0.00%, sys=0.56%, ctx=344, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.9%, 32=17.8%, >=64=65.0% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:22:21.673 issued rwts: total=180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825663: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=8, BW=9028KiB/s (9245kB/s)(106MiB/12023msec) 00:22:21.673 slat (usec): min=529, max=2130.9k, avg=112957.05, stdev=430019.74 00:22:21.673 clat (msec): min=48, max=12013, avg=7192.06, stdev=3824.84 00:22:21.673 lat (msec): min=2114, max=12022, avg=7305.02, stdev=3788.50 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 2123], 5.00th=[ 3742], 10.00th=[ 3809], 20.00th=[ 3910], 00:22:21.673 | 30.00th=[ 4044], 40.00th=[ 4144], 50.00th=[ 4279], 60.00th=[10671], 00:22:21.673 | 70.00th=[11610], 80.00th=[11745], 90.00th=[11879], 95.00th=[12013], 00:22:21.673 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:21.673 | 99.99th=[12013] 00:22:21.673 lat (msec) : 50=0.94%, >=2000=99.06% 00:22:21.673 cpu : usr=0.01%, sys=0.59%, ctx=222, majf=0, minf=27137 00:22:21.673 IO depths : 1=0.9%, 2=1.9%, 4=3.8%, 8=7.5%, 16=15.1%, 32=30.2%, >=64=40.6% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.673 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825664: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=53, BW=53.3MiB/s (55.9MB/s)(536MiB/10049msec) 00:22:21.673 slat (usec): min=38, max=2155.2k, avg=18657.12, stdev=167106.79 00:22:21.673 clat (msec): min=45, max=8762, avg=650.19, stdev=1390.03 00:22:21.673 lat (msec): min=55, max=8764, avg=668.84, stdev=1433.21 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 
1.00th=[ 62], 5.00th=[ 190], 10.00th=[ 271], 20.00th=[ 326], 00:22:21.673 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 380], 60.00th=[ 397], 00:22:21.673 | 70.00th=[ 414], 80.00th=[ 456], 90.00th=[ 701], 95.00th=[ 885], 00:22:21.673 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:22:21.673 | 99.99th=[ 8792] 00:22:21.673 bw ( KiB/s): min=165888, max=358400, per=10.00%, avg=279210.67, stdev=100692.75, samples=3 00:22:21.673 iops : min= 162, max= 350, avg=272.67, stdev=98.33, samples=3 00:22:21.673 lat (msec) : 50=0.19%, 100=2.80%, 250=4.85%, 500=74.07%, 750=10.26% 00:22:21.673 lat (msec) : 1000=4.29%, 2000=0.19%, >=2000=3.36% 00:22:21.673 cpu : usr=0.03%, sys=1.11%, ctx=587, majf=0, minf=32769 00:22:21.673 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.673 issued rwts: total=536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.673 job2: (groupid=0, jobs=1): err= 0: pid=825665: Thu Jul 25 19:13:13 2024 00:22:21.673 read: IOPS=3, BW=3334KiB/s (3414kB/s)(46.0MiB/14129msec) 00:22:21.673 slat (usec): min=371, max=2106.0k, avg=261959.82, stdev=665785.56 00:22:21.673 clat (msec): min=2078, max=14126, avg=8355.44, stdev=3866.55 00:22:21.673 lat (msec): min=4165, max=14128, avg=8617.40, stdev=3839.96 00:22:21.673 clat percentiles (msec): 00:22:21.673 | 1.00th=[ 2072], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 6208], 00:22:21.673 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 6409], 00:22:21.673 | 70.00th=[10671], 80.00th=[14026], 90.00th=[14160], 95.00th=[14160], 00:22:21.673 | 99.00th=[14160], 99.50th=[14160], 99.90th=[14160], 99.95th=[14160], 00:22:21.673 | 99.99th=[14160] 00:22:21.673 lat (msec) : >=2000=100.00% 00:22:21.673 cpu : usr=0.00%, sys=0.19%, ctx=57, majf=0, minf=11777 00:22:21.673 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:22:21.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.673 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.674 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job2: (groupid=0, jobs=1): err= 0: pid=825666: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=5, BW=5432KiB/s (5563kB/s)(64.0MiB/12064msec) 00:22:21.674 slat (usec): min=428, max=2120.1k, avg=187585.36, stdev=577595.62 00:22:21.674 clat (msec): min=56, max=12059, avg=8570.43, stdev=3288.07 00:22:21.674 lat (msec): min=2150, max=12062, avg=8758.01, stdev=3133.47 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 57], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 6409], 00:22:21.674 | 30.00th=[ 6477], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10805], 00:22:21.674 | 70.00th=[10805], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:22:21.674 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:22:21.674 | 99.99th=[12013] 00:22:21.674 lat (msec) : 100=1.56%, >=2000=98.44% 00:22:21.674 cpu : usr=0.00%, sys=0.36%, ctx=58, majf=0, minf=16385 00:22:21.674 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.674 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=100.0%, >=64=0.0% 00:22:21.674 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job2: (groupid=0, jobs=1): err= 0: pid=825667: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=31, BW=31.6MiB/s (33.1MB/s)(382MiB/12083msec) 00:22:21.674 slat (usec): min=38, max=2064.1k, avg=31487.44, stdev=210023.48 00:22:21.674 clat (msec): min=52, max=5790, avg=3756.00, stdev=1853.64 00:22:21.674 lat (msec): min=834, max=5790, avg=3787.49, stdev=1839.75 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 835], 5.00th=[ 852], 10.00th=[ 869], 20.00th=[ 911], 00:22:21.674 | 30.00th=[ 2869], 40.00th=[ 4279], 50.00th=[ 4933], 60.00th=[ 5000], 00:22:21.674 | 70.00th=[ 5067], 80.00th=[ 5269], 90.00th=[ 5537], 95.00th=[ 5671], 00:22:21.674 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:22:21.674 | 99.99th=[ 5805] 00:22:21.674 bw ( KiB/s): min= 2048, max=149504, per=2.33%, avg=65024.00, stdev=63988.59, samples=8 00:22:21.674 iops : min= 2, max= 146, avg=63.50, stdev=62.49, samples=8 00:22:21.674 lat (msec) : 100=0.26%, 1000=20.68%, 2000=7.33%, >=2000=71.73% 00:22:21.674 cpu : usr=0.00%, sys=0.98%, ctx=424, majf=0, minf=32769 00:22:21.674 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:22:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.674 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:21.674 issued rwts: total=382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job2: (groupid=0, jobs=1): err= 0: pid=825668: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=15, BW=15.3MiB/s (16.0MB/s)(186MiB/12155msec) 00:22:21.674 slat (usec): min=37, max=2100.2k, avg=53777.53, stdev=299438.25 00:22:21.674 clat (msec): min=2151, max=8576, avg=5742.57, stdev=1042.51 00:22:21.674 lat (msec): min=2159, max=8577, avg=5796.35, stdev=1022.52 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 5873], 00:22:21.674 | 30.00th=[ 5873], 40.00th=[ 6074], 50.00th=[ 6074], 60.00th=[ 6141], 00:22:21.674 | 70.00th=[ 6141], 80.00th=[ 6275], 90.00th=[ 6342], 95.00th=[ 6477], 00:22:21.674 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:22:21.674 | 99.99th=[ 8557] 00:22:21.674 bw ( KiB/s): min=10240, max=92160, per=1.44%, avg=40277.33, stdev=45118.02, samples=3 00:22:21.674 iops : min= 10, max= 90, avg=39.33, stdev=44.06, samples=3 00:22:21.674 lat (msec) : >=2000=100.00% 00:22:21.674 cpu : usr=0.00%, sys=0.72%, ctx=132, majf=0, minf=32769 00:22:21.674 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.2%, >=64=66.1% 00:22:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.674 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:22:21.674 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job2: (groupid=0, jobs=1): err= 0: pid=825669: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=41, BW=41.1MiB/s (43.1MB/s)(415MiB/10092msec) 00:22:21.674 slat (usec): min=35, max=2096.2k, avg=24095.83, stdev=181568.85 00:22:21.674 clat (msec): min=90, max=8258, avg=2224.46, stdev=2972.30 00:22:21.674 lat (msec): min=94, max=8267, avg=2248.56, stdev=2985.55 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 207], 
5.00th=[ 313], 10.00th=[ 422], 20.00th=[ 558], 00:22:21.674 | 30.00th=[ 693], 40.00th=[ 760], 50.00th=[ 827], 60.00th=[ 852], 00:22:21.674 | 70.00th=[ 885], 80.00th=[ 5134], 90.00th=[ 8154], 95.00th=[ 8221], 00:22:21.674 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288], 00:22:21.674 | 99.99th=[ 8288] 00:22:21.674 bw ( KiB/s): min=102400, max=188416, per=5.28%, avg=147456.00, stdev=39712.19, samples=4 00:22:21.674 iops : min= 100, max= 184, avg=144.00, stdev=38.78, samples=4 00:22:21.674 lat (msec) : 100=0.72%, 250=3.86%, 500=11.33%, 750=18.55%, 1000=43.61% 00:22:21.674 lat (msec) : >=2000=21.93% 00:22:21.674 cpu : usr=0.00%, sys=1.12%, ctx=440, majf=0, minf=32769 00:22:21.674 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:22:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.674 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:21.674 issued rwts: total=415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job3: (groupid=0, jobs=1): err= 0: pid=825670: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=36, BW=36.5MiB/s (38.3MB/s)(442MiB/12097msec) 00:22:21.674 slat (usec): min=29, max=2076.5k, avg=27158.24, stdev=195247.53 00:22:21.674 clat (msec): min=91, max=5325, avg=2618.64, stdev=1480.56 00:22:21.674 lat (msec): min=407, max=6484, avg=2645.79, stdev=1486.74 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 409], 5.00th=[ 885], 10.00th=[ 969], 20.00th=[ 1217], 00:22:21.674 | 30.00th=[ 1301], 40.00th=[ 2089], 50.00th=[ 2165], 60.00th=[ 2567], 00:22:21.674 | 70.00th=[ 3842], 80.00th=[ 4530], 90.00th=[ 4732], 95.00th=[ 4933], 00:22:21.674 | 99.00th=[ 5269], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:22:21.674 | 99.99th=[ 5336] 00:22:21.674 bw ( KiB/s): min=20480, max=155648, per=3.29%, avg=91867.43, stdev=45525.66, samples=7 00:22:21.674 iops : min= 20, max= 152, avg=89.71, stdev=44.46, samples=7 00:22:21.674 lat (msec) : 100=0.23%, 500=2.26%, 1000=8.60%, 2000=24.89%, >=2000=64.03% 00:22:21.674 cpu : usr=0.00%, sys=0.72%, ctx=534, majf=0, minf=32769 00:22:21.674 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.7% 00:22:21.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.674 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:21.674 issued rwts: total=442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.674 job3: (groupid=0, jobs=1): err= 0: pid=825671: Thu Jul 25 19:13:13 2024 00:22:21.674 read: IOPS=78, BW=78.9MiB/s (82.7MB/s)(794MiB/10062msec) 00:22:21.674 slat (usec): min=32, max=2085.7k, avg=12611.27, stdev=112761.39 00:22:21.674 clat (msec): min=43, max=6204, avg=943.45, stdev=1125.21 00:22:21.674 lat (msec): min=106, max=6208, avg=956.06, stdev=1140.34 00:22:21.674 clat percentiles (msec): 00:22:21.674 | 1.00th=[ 163], 5.00th=[ 288], 10.00th=[ 481], 20.00th=[ 542], 00:22:21.674 | 30.00th=[ 567], 40.00th=[ 609], 50.00th=[ 659], 60.00th=[ 776], 00:22:21.674 | 70.00th=[ 894], 80.00th=[ 936], 90.00th=[ 969], 95.00th=[ 2836], 00:22:21.674 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:22:21.674 | 99.99th=[ 6208] 00:22:21.674 bw ( KiB/s): min=110592, max=249856, per=6.11%, avg=170496.00, stdev=51191.22, samples=8 00:22:21.674 iops : min= 108, max= 244, avg=166.50, stdev=49.99, samples=8 00:22:21.675 lat 
(msec) : 50=0.13%, 250=3.40%, 500=7.81%, 750=45.34%, 1000=37.03% 00:22:21.675 lat (msec) : 2000=0.63%, >=2000=5.67% 00:22:21.675 cpu : usr=0.04%, sys=1.56%, ctx=849, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.675 issued rwts: total=794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825672: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=79, BW=79.5MiB/s (83.3MB/s)(798MiB/10041msec) 00:22:21.675 slat (usec): min=32, max=2070.7k, avg=12530.73, stdev=111020.55 00:22:21.675 clat (msec): min=37, max=6220, avg=916.91, stdev=1050.42 00:22:21.675 lat (msec): min=41, max=6222, avg=929.44, stdev=1067.45 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 75], 5.00th=[ 284], 10.00th=[ 313], 20.00th=[ 334], 00:22:21.675 | 30.00th=[ 363], 40.00th=[ 460], 50.00th=[ 617], 60.00th=[ 885], 00:22:21.675 | 70.00th=[ 961], 80.00th=[ 1167], 90.00th=[ 1418], 95.00th=[ 2869], 00:22:21.675 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6208], 99.95th=[ 6208], 00:22:21.675 | 99.99th=[ 6208] 00:22:21.675 bw ( KiB/s): min=81920, max=385024, per=6.16%, avg=171776.00, stdev=99234.83, samples=8 00:22:21.675 iops : min= 80, max= 376, avg=167.75, stdev=96.91, samples=8 00:22:21.675 lat (msec) : 50=0.38%, 100=1.88%, 250=0.88%, 500=39.72%, 750=9.27% 00:22:21.675 lat (msec) : 1000=21.93%, 2000=19.80%, >=2000=6.14% 00:22:21.675 cpu : usr=0.06%, sys=1.45%, ctx=956, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.675 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825673: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=62, BW=62.3MiB/s (65.3MB/s)(626MiB/10053msec) 00:22:21.675 slat (usec): min=35, max=2073.8k, avg=15994.56, stdev=126158.31 00:22:21.675 clat (msec): min=36, max=6041, avg=1167.26, stdev=1085.47 00:22:21.675 lat (msec): min=79, max=6042, avg=1183.25, stdev=1101.85 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 104], 5.00th=[ 313], 10.00th=[ 447], 20.00th=[ 634], 00:22:21.675 | 30.00th=[ 684], 40.00th=[ 718], 50.00th=[ 978], 60.00th=[ 1116], 00:22:21.675 | 70.00th=[ 1167], 80.00th=[ 1318], 90.00th=[ 1536], 95.00th=[ 2735], 00:22:21.675 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6074], 99.95th=[ 6074], 00:22:21.675 | 99.99th=[ 6074] 00:22:21.675 bw ( KiB/s): min= 4096, max=204800, per=4.06%, avg=113228.00, stdev=65680.94, samples=9 00:22:21.675 iops : min= 4, max= 200, avg=110.44, stdev=64.14, samples=9 00:22:21.675 lat (msec) : 50=0.16%, 100=0.80%, 250=3.67%, 500=5.43%, 750=33.39% 00:22:21.675 lat (msec) : 1000=7.03%, 2000=42.65%, >=2000=6.87% 00:22:21.675 cpu : usr=0.01%, sys=1.32%, ctx=642, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 
00:22:21.675 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825674: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=30, BW=31.0MiB/s (32.5MB/s)(374MiB/12078msec) 00:22:21.675 slat (usec): min=31, max=2202.2k, avg=32117.08, stdev=229591.04 00:22:21.675 clat (msec): min=64, max=5140, avg=2053.39, stdev=1855.53 00:22:21.675 lat (msec): min=590, max=6464, avg=2085.50, stdev=1868.58 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 592], 5.00th=[ 600], 10.00th=[ 625], 20.00th=[ 659], 00:22:21.675 | 30.00th=[ 684], 40.00th=[ 701], 50.00th=[ 726], 60.00th=[ 852], 00:22:21.675 | 70.00th=[ 4396], 80.00th=[ 4665], 90.00th=[ 4799], 95.00th=[ 4866], 00:22:21.675 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5134], 99.95th=[ 5134], 00:22:21.675 | 99.99th=[ 5134] 00:22:21.675 bw ( KiB/s): min=14054, max=218698, per=3.61%, avg=100617.60, stdev=97057.37, samples=5 00:22:21.675 iops : min= 13, max= 213, avg=98.00, stdev=94.77, samples=5 00:22:21.675 lat (msec) : 100=0.27%, 750=52.94%, 1000=10.43%, >=2000=36.36% 00:22:21.675 cpu : usr=0.01%, sys=0.94%, ctx=339, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.2% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:21.675 issued rwts: total=374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825675: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=102, BW=102MiB/s (107MB/s)(1025MiB/10013msec) 00:22:21.675 slat (usec): min=31, max=2008.2k, avg=9756.97, stdev=92639.26 00:22:21.675 clat (msec): min=8, max=5664, avg=1014.10, stdev=1432.63 00:22:21.675 lat (msec): min=27, max=5792, avg=1023.86, stdev=1441.90 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 100], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 138], 00:22:21.675 | 30.00th=[ 251], 40.00th=[ 443], 50.00th=[ 542], 60.00th=[ 642], 00:22:21.675 | 70.00th=[ 810], 80.00th=[ 1267], 90.00th=[ 3104], 95.00th=[ 5537], 00:22:21.675 | 99.00th=[ 5604], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:22:21.675 | 99.99th=[ 5671] 00:22:21.675 bw ( KiB/s): min=73728, max=653312, per=7.66%, avg=213760.00, stdev=192682.92, samples=8 00:22:21.675 iops : min= 72, max= 638, avg=208.75, stdev=188.17, samples=8 00:22:21.675 lat (msec) : 10=0.10%, 50=0.49%, 100=0.49%, 250=28.88%, 500=14.34% 00:22:21.675 lat (msec) : 750=23.41%, 1000=7.51%, 2000=14.24%, >=2000=10.54% 00:22:21.675 cpu : usr=0.01%, sys=1.46%, ctx=1268, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.675 issued rwts: total=1025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825676: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=64, BW=64.4MiB/s (67.5MB/s)(646MiB/10031msec) 00:22:21.675 slat (usec): min=33, max=2115.7k, avg=15478.23, stdev=126123.40 00:22:21.675 clat (msec): min=29, max=7300, avg=776.94, stdev=751.23 00:22:21.675 lat (msec): min=32, max=7302, avg=792.41, 
stdev=795.16 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 79], 5.00th=[ 142], 10.00th=[ 167], 20.00th=[ 209], 00:22:21.675 | 30.00th=[ 359], 40.00th=[ 584], 50.00th=[ 676], 60.00th=[ 768], 00:22:21.675 | 70.00th=[ 835], 80.00th=[ 936], 90.00th=[ 1854], 95.00th=[ 1955], 00:22:21.675 | 99.00th=[ 3910], 99.50th=[ 6007], 99.90th=[ 7282], 99.95th=[ 7282], 00:22:21.675 | 99.99th=[ 7282] 00:22:21.675 bw ( KiB/s): min=24576, max=538624, per=6.35%, avg=177152.00, stdev=187625.19, samples=6 00:22:21.675 iops : min= 24, max= 526, avg=173.00, stdev=183.23, samples=6 00:22:21.675 lat (msec) : 50=0.62%, 100=0.77%, 250=21.36%, 500=13.78%, 750=21.67% 00:22:21.675 lat (msec) : 1000=24.15%, 2000=15.17%, >=2000=2.48% 00:22:21.675 cpu : usr=0.00%, sys=0.99%, ctx=1050, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.675 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825677: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=66, BW=66.4MiB/s (69.6MB/s)(666MiB/10027msec) 00:22:21.675 slat (usec): min=30, max=2075.9k, avg=15040.75, stdev=122272.83 00:22:21.675 clat (msec): min=6, max=6180, avg=886.15, stdev=657.88 00:22:21.675 lat (msec): min=58, max=6192, avg=901.19, stdev=688.77 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 63], 5.00th=[ 296], 10.00th=[ 498], 20.00th=[ 634], 00:22:21.675 | 30.00th=[ 693], 40.00th=[ 802], 50.00th=[ 860], 60.00th=[ 927], 00:22:21.675 | 70.00th=[ 969], 80.00th=[ 1003], 90.00th=[ 1062], 95.00th=[ 1116], 00:22:21.675 | 99.00th=[ 4933], 99.50th=[ 6141], 99.90th=[ 6208], 99.95th=[ 6208], 00:22:21.675 | 99.99th=[ 6208] 00:22:21.675 bw ( KiB/s): min=102400, max=198656, per=4.79%, avg=133705.14, stdev=36150.04, samples=7 00:22:21.675 iops : min= 100, max= 194, avg=130.57, stdev=35.30, samples=7 00:22:21.675 lat (msec) : 10=0.15%, 100=2.40%, 250=2.25%, 500=7.06%, 750=22.52% 00:22:21.675 lat (msec) : 1000=45.35%, 2000=17.87%, >=2000=2.40% 00:22:21.675 cpu : usr=0.00%, sys=1.23%, ctx=726, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:22:21.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.675 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.675 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.675 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.675 job3: (groupid=0, jobs=1): err= 0: pid=825678: Thu Jul 25 19:13:13 2024 00:22:21.675 read: IOPS=28, BW=28.3MiB/s (29.7MB/s)(284MiB/10022msec) 00:22:21.675 slat (usec): min=33, max=2084.7k, avg=35256.59, stdev=221963.47 00:22:21.675 clat (msec): min=7, max=6929, avg=1880.03, stdev=1075.78 00:22:21.675 lat (msec): min=26, max=7031, avg=1915.28, stdev=1110.55 00:22:21.675 clat percentiles (msec): 00:22:21.675 | 1.00th=[ 29], 5.00th=[ 667], 10.00th=[ 743], 20.00th=[ 944], 00:22:21.675 | 30.00th=[ 1267], 40.00th=[ 1418], 50.00th=[ 1586], 60.00th=[ 2333], 00:22:21.675 | 70.00th=[ 2500], 80.00th=[ 2635], 90.00th=[ 2735], 95.00th=[ 2769], 00:22:21.675 | 99.00th=[ 5805], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:22:21.675 | 99.99th=[ 6946] 00:22:21.675 bw ( KiB/s): min=129024, 
max=167936, per=5.32%, avg=148480.00, stdev=27514.94, samples=2 00:22:21.675 iops : min= 126, max= 164, avg=145.00, stdev=26.87, samples=2 00:22:21.675 lat (msec) : 10=0.35%, 50=2.11%, 100=1.41%, 250=0.35%, 750=7.75% 00:22:21.675 lat (msec) : 1000=11.62%, 2000=27.11%, >=2000=49.30% 00:22:21.675 cpu : usr=0.00%, sys=0.86%, ctx=448, majf=0, minf=32769 00:22:21.675 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.3%, >=64=77.8% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:22:21.676 issued rwts: total=284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job3: (groupid=0, jobs=1): err= 0: pid=825679: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=111, BW=112MiB/s (117MB/s)(1126MiB/10068msec) 00:22:21.676 slat (usec): min=34, max=2073.1k, avg=8884.42, stdev=94322.60 00:22:21.676 clat (msec): min=59, max=6381, avg=632.91, stdev=946.57 00:22:21.676 lat (msec): min=167, max=6383, avg=641.79, stdev=961.81 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 180], 5.00th=[ 266], 10.00th=[ 292], 20.00th=[ 321], 00:22:21.676 | 30.00th=[ 363], 40.00th=[ 426], 50.00th=[ 460], 60.00th=[ 498], 00:22:21.676 | 70.00th=[ 567], 80.00th=[ 592], 90.00th=[ 667], 95.00th=[ 793], 00:22:21.676 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409], 00:22:21.676 | 99.99th=[ 6409] 00:22:21.676 bw ( KiB/s): min=92160, max=378123, per=9.16%, avg=255597.25, stdev=98090.07, samples=8 00:22:21.676 iops : min= 90, max= 369, avg=249.50, stdev=95.79, samples=8 00:22:21.676 lat (msec) : 100=0.09%, 250=3.02%, 500=58.61%, 750=32.59%, 1000=1.95% 00:22:21.676 lat (msec) : >=2000=3.73% 00:22:21.676 cpu : usr=0.04%, sys=1.49%, ctx=881, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.676 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job3: (groupid=0, jobs=1): err= 0: pid=825680: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=21, BW=21.1MiB/s (22.1MB/s)(254MiB/12054msec) 00:22:21.676 slat (usec): min=29, max=2113.6k, avg=47216.11, stdev=267539.64 00:22:21.676 clat (msec): min=60, max=6852, avg=3613.19, stdev=1829.55 00:22:21.676 lat (msec): min=1129, max=6857, avg=3660.40, stdev=1820.34 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 1133], 5.00th=[ 1217], 10.00th=[ 1318], 20.00th=[ 1469], 00:22:21.676 | 30.00th=[ 1536], 40.00th=[ 3540], 50.00th=[ 4396], 60.00th=[ 4597], 00:22:21.676 | 70.00th=[ 4799], 80.00th=[ 5067], 90.00th=[ 5336], 95.00th=[ 6745], 00:22:21.676 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:22:21.676 | 99.99th=[ 6879] 00:22:21.676 bw ( KiB/s): min= 8192, max=118784, per=2.31%, avg=64512.00, stdev=45288.13, samples=4 00:22:21.676 iops : min= 8, max= 116, avg=63.00, stdev=44.23, samples=4 00:22:21.676 lat (msec) : 100=0.39%, 2000=35.83%, >=2000=63.78% 00:22:21.676 cpu : usr=0.02%, sys=0.65%, ctx=475, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 
0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:22:21.676 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job3: (groupid=0, jobs=1): err= 0: pid=825681: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=21, BW=21.6MiB/s (22.7MB/s)(262MiB/12113msec) 00:22:21.676 slat (usec): min=45, max=2071.6k, avg=45854.97, stdev=260469.34 00:22:21.676 clat (msec): min=97, max=7106, avg=3869.70, stdev=1950.19 00:22:21.676 lat (msec): min=926, max=7111, avg=3915.56, stdev=1937.87 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 919], 5.00th=[ 1028], 10.00th=[ 1167], 20.00th=[ 1653], 00:22:21.676 | 30.00th=[ 2165], 40.00th=[ 3876], 50.00th=[ 4463], 60.00th=[ 4597], 00:22:21.676 | 70.00th=[ 4799], 80.00th=[ 5000], 90.00th=[ 7013], 95.00th=[ 7080], 00:22:21.676 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:22:21.676 | 99.99th=[ 7080] 00:22:21.676 bw ( KiB/s): min= 2052, max=133120, per=1.98%, avg=55296.80, stdev=57616.71, samples=5 00:22:21.676 iops : min= 2, max= 130, avg=54.00, stdev=56.27, samples=5 00:22:21.676 lat (msec) : 100=0.38%, 1000=3.82%, 2000=22.52%, >=2000=73.28% 00:22:21.676 cpu : usr=0.01%, sys=0.72%, ctx=537, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.2%, >=64=76.0% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:22:21.676 issued rwts: total=262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job3: (groupid=0, jobs=1): err= 0: pid=825682: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=73, BW=73.5MiB/s (77.1MB/s)(890MiB/12101msec) 00:22:21.676 slat (usec): min=57, max=2089.9k, avg=11274.92, stdev=107054.51 00:22:21.676 clat (msec): min=532, max=6134, avg=1219.18, stdev=1315.50 00:22:21.676 lat (msec): min=532, max=6136, avg=1230.46, stdev=1325.19 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 542], 5.00th=[ 550], 10.00th=[ 567], 20.00th=[ 609], 00:22:21.676 | 30.00th=[ 617], 40.00th=[ 642], 50.00th=[ 667], 60.00th=[ 693], 00:22:21.676 | 70.00th=[ 760], 80.00th=[ 2165], 90.00th=[ 2668], 95.00th=[ 4866], 00:22:21.676 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:22:21.676 | 99.99th=[ 6141] 00:22:21.676 bw ( KiB/s): min= 1848, max=221184, per=6.22%, avg=173602.67, stdev=67179.11, samples=9 00:22:21.676 iops : min= 1, max= 216, avg=169.44, stdev=65.86, samples=9 00:22:21.676 lat (msec) : 750=67.87%, 1000=11.46%, >=2000=20.67% 00:22:21.676 cpu : usr=0.02%, sys=1.07%, ctx=688, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.676 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job4: (groupid=0, jobs=1): err= 0: pid=825683: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=3, BW=3632KiB/s (3719kB/s)(43.0MiB/12125msec) 00:22:21.676 slat (usec): min=477, max=2107.7k, avg=279741.53, stdev=682896.31 00:22:21.676 clat (msec): min=95, max=12121, avg=7098.12, stdev=3162.43 00:22:21.676 lat (msec): min=2158, max=12124, avg=7377.86, stdev=3058.53 
00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 95], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:22:21.676 | 30.00th=[ 6544], 40.00th=[ 6544], 50.00th=[ 6544], 60.00th=[ 6544], 00:22:21.676 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[12013], 95.00th=[12013], 00:22:21.676 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:22:21.676 | 99.99th=[12147] 00:22:21.676 lat (msec) : 100=2.33%, >=2000=97.67% 00:22:21.676 cpu : usr=0.00%, sys=0.19%, ctx=57, majf=0, minf=11009 00:22:21.676 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:22:21.676 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job4: (groupid=0, jobs=1): err= 0: pid=825684: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=57, BW=57.9MiB/s (60.7MB/s)(701MiB/12104msec) 00:22:21.676 slat (usec): min=31, max=2071.1k, avg=17130.10, stdev=141950.80 00:22:21.676 clat (msec): min=91, max=6081, avg=1369.41, stdev=1177.45 00:22:21.676 lat (msec): min=542, max=6082, avg=1386.54, stdev=1190.49 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 542], 5.00th=[ 550], 10.00th=[ 558], 20.00th=[ 609], 00:22:21.676 | 30.00th=[ 651], 40.00th=[ 709], 50.00th=[ 844], 60.00th=[ 1062], 00:22:21.676 | 70.00th=[ 1183], 80.00th=[ 2433], 90.00th=[ 2970], 95.00th=[ 3339], 00:22:21.676 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:22:21.676 | 99.99th=[ 6074] 00:22:21.676 bw ( KiB/s): min=83968, max=221184, per=5.26%, avg=146688.00, stdev=50474.40, samples=8 00:22:21.676 iops : min= 82, max= 216, avg=143.25, stdev=49.29, samples=8 00:22:21.676 lat (msec) : 100=0.14%, 750=42.51%, 1000=13.55%, 2000=20.40%, >=2000=23.40% 00:22:21.676 cpu : usr=0.02%, sys=1.11%, ctx=740, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.676 issued rwts: total=701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job4: (groupid=0, jobs=1): err= 0: pid=825685: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=33, BW=33.9MiB/s (35.6MB/s)(411MiB/12118msec) 00:22:21.676 slat (usec): min=45, max=3980.2k, avg=29237.14, stdev=232280.07 00:22:21.676 clat (msec): min=98, max=6245, avg=3497.66, stdev=1264.45 00:22:21.676 lat (msec): min=654, max=6452, avg=3526.90, stdev=1263.32 00:22:21.676 clat percentiles (msec): 00:22:21.676 | 1.00th=[ 735], 5.00th=[ 1469], 10.00th=[ 2366], 20.00th=[ 2467], 00:22:21.676 | 30.00th=[ 2635], 40.00th=[ 2937], 50.00th=[ 3239], 60.00th=[ 3440], 00:22:21.676 | 70.00th=[ 4111], 80.00th=[ 4665], 90.00th=[ 5537], 95.00th=[ 5940], 00:22:21.676 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275], 00:22:21.676 | 99.99th=[ 6275] 00:22:21.676 bw ( KiB/s): min= 4096, max=198656, per=1.89%, avg=52695.73, stdev=54051.66, samples=11 00:22:21.676 iops : min= 4, max= 194, avg=51.45, stdev=52.79, samples=11 00:22:21.676 lat (msec) : 100=0.24%, 750=0.97%, 2000=3.89%, >=2000=94.89% 00:22:21.676 cpu : usr=0.02%, sys=0.82%, ctx=831, majf=0, minf=32769 00:22:21.676 IO depths : 1=0.2%, 2=0.5%, 
4=1.0%, 8=1.9%, 16=3.9%, 32=7.8%, >=64=84.7% 00:22:21.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.676 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:21.676 issued rwts: total=411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.676 job4: (groupid=0, jobs=1): err= 0: pid=825686: Thu Jul 25 19:13:13 2024 00:22:21.676 read: IOPS=196, BW=196MiB/s (206MB/s)(2365MiB/12054msec) 00:22:21.676 slat (usec): min=33, max=2107.7k, avg=5056.14, stdev=74496.66 00:22:21.676 clat (msec): min=86, max=6794, avg=626.26, stdev=1402.57 00:22:21.677 lat (msec): min=86, max=6797, avg=631.32, stdev=1407.80 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 101], 5.00th=[ 130], 10.00th=[ 148], 20.00th=[ 190], 00:22:21.677 | 30.00th=[ 239], 40.00th=[ 264], 50.00th=[ 284], 60.00th=[ 326], 00:22:21.677 | 70.00th=[ 376], 80.00th=[ 393], 90.00th=[ 426], 95.00th=[ 4597], 00:22:21.677 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6812], 99.95th=[ 6812], 00:22:21.677 | 99.99th=[ 6812] 00:22:21.677 bw ( KiB/s): min=10240, max=755712, per=12.61%, avg=351886.62, stdev=232739.01, samples=13 00:22:21.677 iops : min= 10, max= 738, avg=343.54, stdev=227.37, samples=13 00:22:21.677 lat (msec) : 100=1.10%, 250=30.87%, 500=61.90%, 750=0.17%, >=2000=5.96% 00:22:21.677 cpu : usr=0.03%, sys=1.90%, ctx=1667, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.677 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825687: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=34, BW=34.9MiB/s (36.6MB/s)(420MiB/12018msec) 00:22:21.677 slat (usec): min=40, max=2179.5k, avg=28378.08, stdev=203728.55 00:22:21.677 clat (msec): min=96, max=6826, avg=2569.37, stdev=2303.45 00:22:21.677 lat (msec): min=622, max=6828, avg=2597.75, stdev=2307.00 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 625], 5.00th=[ 659], 10.00th=[ 676], 20.00th=[ 701], 00:22:21.677 | 30.00th=[ 726], 40.00th=[ 760], 50.00th=[ 793], 60.00th=[ 2903], 00:22:21.677 | 70.00th=[ 4010], 80.00th=[ 4732], 90.00th=[ 6611], 95.00th=[ 6678], 00:22:21.677 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:22:21.677 | 99.99th=[ 6812] 00:22:21.677 bw ( KiB/s): min= 4096, max=212992, per=3.03%, avg=84524.43, stdev=81893.43, samples=7 00:22:21.677 iops : min= 4, max= 208, avg=82.43, stdev=80.08, samples=7 00:22:21.677 lat (msec) : 100=0.24%, 750=37.62%, 1000=16.19%, >=2000=45.95% 00:22:21.677 cpu : usr=0.00%, sys=0.95%, ctx=727, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:21.677 issued rwts: total=420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825688: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=31, BW=31.3MiB/s (32.9MB/s)(379MiB/12097msec) 00:22:21.677 slat (usec): min=34, max=2095.2k, 
avg=31599.14, stdev=221794.31 00:22:21.677 clat (msec): min=119, max=7497, avg=2975.89, stdev=2899.60 00:22:21.677 lat (msec): min=375, max=7499, avg=3007.49, stdev=2901.50 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 376], 5.00th=[ 380], 10.00th=[ 380], 20.00th=[ 388], 00:22:21.677 | 30.00th=[ 485], 40.00th=[ 659], 50.00th=[ 1083], 60.00th=[ 3239], 00:22:21.677 | 70.00th=[ 5336], 80.00th=[ 7282], 90.00th=[ 7416], 95.00th=[ 7416], 00:22:21.677 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7483], 99.95th=[ 7483], 00:22:21.677 | 99.99th=[ 7483] 00:22:21.677 bw ( KiB/s): min=22528, max=223232, per=3.68%, avg=102732.60, stdev=97244.21, samples=5 00:22:21.677 iops : min= 22, max= 218, avg=100.20, stdev=94.82, samples=5 00:22:21.677 lat (msec) : 250=0.26%, 500=30.61%, 750=13.72%, 1000=2.90%, 2000=6.86% 00:22:21.677 lat (msec) : >=2000=45.65% 00:22:21.677 cpu : usr=0.00%, sys=0.76%, ctx=418, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:22:21.677 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825689: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=26, BW=26.0MiB/s (27.3MB/s)(314MiB/12059msec) 00:22:21.677 slat (usec): min=34, max=2113.4k, avg=38085.11, stdev=230417.59 00:22:21.677 clat (msec): min=98, max=6281, avg=3134.22, stdev=1766.63 00:22:21.677 lat (msec): min=967, max=6282, avg=3172.31, stdev=1764.70 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 969], 5.00th=[ 1003], 10.00th=[ 1099], 20.00th=[ 1200], 00:22:21.677 | 30.00th=[ 1217], 40.00th=[ 2123], 50.00th=[ 2970], 60.00th=[ 4396], 00:22:21.677 | 70.00th=[ 4665], 80.00th=[ 4799], 90.00th=[ 5201], 95.00th=[ 6141], 00:22:21.677 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:22:21.677 | 99.99th=[ 6275] 00:22:21.677 bw ( KiB/s): min=19348, max=102400, per=2.68%, avg=74730.40, stdev=33226.39, samples=5 00:22:21.677 iops : min= 18, max= 100, avg=72.80, stdev=32.82, samples=5 00:22:21.677 lat (msec) : 100=0.32%, 1000=4.46%, 2000=32.48%, >=2000=62.74% 00:22:21.677 cpu : usr=0.00%, sys=0.76%, ctx=385, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=79.9% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:22:21.677 issued rwts: total=314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825690: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=72, BW=72.7MiB/s (76.3MB/s)(876MiB/12044msec) 00:22:21.677 slat (usec): min=30, max=2085.1k, avg=11431.42, stdev=90152.20 00:22:21.677 clat (msec): min=362, max=5991, avg=1274.70, stdev=928.67 00:22:21.677 lat (msec): min=366, max=6011, avg=1286.13, stdev=943.12 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 368], 5.00th=[ 380], 10.00th=[ 405], 20.00th=[ 456], 00:22:21.677 | 30.00th=[ 523], 40.00th=[ 600], 50.00th=[ 776], 60.00th=[ 1536], 00:22:21.677 | 70.00th=[ 1821], 80.00th=[ 2232], 90.00th=[ 2534], 95.00th=[ 2635], 00:22:21.677 | 99.00th=[ 4144], 99.50th=[ 4144], 99.90th=[ 6007], 99.95th=[ 
6007], 00:22:21.677 | 99.99th=[ 6007] 00:22:21.677 bw ( KiB/s): min= 1992, max=318850, per=4.99%, avg=139387.09, stdev=98958.66, samples=11 00:22:21.677 iops : min= 1, max= 311, avg=136.00, stdev=96.70, samples=11 00:22:21.677 lat (msec) : 500=27.74%, 750=22.03%, 1000=3.31%, 2000=19.86%, >=2000=27.05% 00:22:21.677 cpu : usr=0.01%, sys=1.00%, ctx=1128, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.677 issued rwts: total=876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825691: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=20, BW=20.7MiB/s (21.7MB/s)(251MiB/12119msec) 00:22:21.677 slat (usec): min=94, max=2105.4k, avg=39887.55, stdev=231608.01 00:22:21.677 clat (msec): min=2105, max=10698, avg=5669.14, stdev=1411.58 00:22:21.677 lat (msec): min=2121, max=10739, avg=5709.03, stdev=1430.03 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 2123], 5.00th=[ 3574], 10.00th=[ 3641], 20.00th=[ 4245], 00:22:21.677 | 30.00th=[ 5269], 40.00th=[ 5604], 50.00th=[ 5805], 60.00th=[ 6074], 00:22:21.677 | 70.00th=[ 6141], 80.00th=[ 6342], 90.00th=[ 8423], 95.00th=[ 8423], 00:22:21.677 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[10671], 99.95th=[10671], 00:22:21.677 | 99.99th=[10671] 00:22:21.677 bw ( KiB/s): min= 1822, max=73728, per=1.14%, avg=31715.75, stdev=26080.70, samples=8 00:22:21.677 iops : min= 1, max= 72, avg=30.87, stdev=25.60, samples=8 00:22:21.677 lat (msec) : >=2000=100.00% 00:22:21.677 cpu : usr=0.00%, sys=0.72%, ctx=512, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:22:21.677 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825692: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=58, BW=58.7MiB/s (61.6MB/s)(588MiB/10016msec) 00:22:21.677 slat (usec): min=32, max=2077.4k, avg=17004.25, stdev=154082.25 00:22:21.677 clat (msec): min=14, max=6097, avg=1244.11, stdev=1349.33 00:22:21.677 lat (msec): min=17, max=6098, avg=1261.12, stdev=1363.64 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 50], 5.00th=[ 165], 10.00th=[ 401], 20.00th=[ 435], 00:22:21.677 | 30.00th=[ 464], 40.00th=[ 498], 50.00th=[ 584], 60.00th=[ 684], 00:22:21.677 | 70.00th=[ 760], 80.00th=[ 2467], 90.00th=[ 2601], 95.00th=[ 4799], 00:22:21.677 | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:22:21.677 | 99.99th=[ 6074] 00:22:21.677 bw ( KiB/s): min=69216, max=286720, per=6.76%, avg=188742.40, stdev=96823.39, samples=5 00:22:21.677 iops : min= 67, max= 280, avg=184.20, stdev=94.74, samples=5 00:22:21.677 lat (msec) : 20=0.34%, 50=1.02%, 100=1.87%, 250=2.55%, 500=36.39% 00:22:21.677 lat (msec) : 750=27.72%, 1000=1.02%, >=2000=29.08% 00:22:21.677 cpu : usr=0.00%, sys=1.09%, ctx=451, majf=0, minf=32769 00:22:21.677 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:22:21.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:21.677 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.677 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.677 job4: (groupid=0, jobs=1): err= 0: pid=825693: Thu Jul 25 19:13:13 2024 00:22:21.677 read: IOPS=83, BW=83.9MiB/s (87.9MB/s)(1011MiB/12056msec) 00:22:21.677 slat (usec): min=33, max=2101.1k, avg=9892.59, stdev=99310.34 00:22:21.677 clat (msec): min=305, max=4137, avg=1197.04, stdev=1077.18 00:22:21.677 lat (msec): min=310, max=4146, avg=1206.93, stdev=1081.80 00:22:21.677 clat percentiles (msec): 00:22:21.677 | 1.00th=[ 317], 5.00th=[ 347], 10.00th=[ 368], 20.00th=[ 376], 00:22:21.677 | 30.00th=[ 477], 40.00th=[ 550], 50.00th=[ 651], 60.00th=[ 751], 00:22:21.677 | 70.00th=[ 835], 80.00th=[ 2500], 90.00th=[ 2836], 95.00th=[ 2903], 00:22:21.677 | 99.00th=[ 4111], 99.50th=[ 4111], 99.90th=[ 4144], 99.95th=[ 4144], 00:22:21.677 | 99.99th=[ 4144] 00:22:21.677 bw ( KiB/s): min=53248, max=364544, per=7.21%, avg=201159.11, stdev=106755.51, samples=9 00:22:21.678 iops : min= 52, max= 356, avg=196.44, stdev=104.25, samples=9 00:22:21.678 lat (msec) : 500=34.42%, 750=24.93%, 1000=10.98%, >=2000=29.67% 00:22:21.678 cpu : usr=0.02%, sys=1.19%, ctx=1267, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.678 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job4: (groupid=0, jobs=1): err= 0: pid=825694: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=54, BW=54.5MiB/s (57.2MB/s)(661MiB/12121msec) 00:22:21.678 slat (usec): min=31, max=2215.6k, avg=15186.04, stdev=128082.55 00:22:21.678 clat (msec): min=397, max=4222, avg=1798.24, stdev=1375.78 00:22:21.678 lat (msec): min=398, max=4231, avg=1813.42, stdev=1381.89 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 401], 5.00th=[ 401], 10.00th=[ 422], 20.00th=[ 464], 00:22:21.678 | 30.00th=[ 575], 40.00th=[ 651], 50.00th=[ 751], 60.00th=[ 2635], 00:22:21.678 | 70.00th=[ 2970], 80.00th=[ 3306], 90.00th=[ 3675], 95.00th=[ 4044], 00:22:21.678 | 99.00th=[ 4178], 99.50th=[ 4212], 99.90th=[ 4212], 99.95th=[ 4212], 00:22:21.678 | 99.99th=[ 4212] 00:22:21.678 bw ( KiB/s): min= 1822, max=315392, per=3.92%, avg=109289.50, stdev=99880.21, samples=10 00:22:21.678 iops : min= 1, max= 308, avg=106.60, stdev=97.55, samples=10 00:22:21.678 lat (msec) : 500=24.96%, 750=25.11%, 1000=1.06%, 2000=4.24%, >=2000=44.63% 00:22:21.678 cpu : usr=0.01%, sys=1.17%, ctx=793, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.678 issued rwts: total=661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job4: (groupid=0, jobs=1): err= 0: pid=825695: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=101, BW=101MiB/s (106MB/s)(1219MiB/12064msec) 00:22:21.678 slat (usec): min=36, max=2056.9k, avg=9815.69, stdev=106897.51 00:22:21.678 clat (msec): min=91, max=3831, avg=916.01, stdev=1097.29 00:22:21.678 lat 
(msec): min=246, max=3840, avg=925.82, stdev=1103.85 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 249], 20.00th=[ 251], 00:22:21.678 | 30.00th=[ 253], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 313], 00:22:21.678 | 70.00th=[ 709], 80.00th=[ 2467], 90.00th=[ 2769], 95.00th=[ 3373], 00:22:21.678 | 99.00th=[ 3641], 99.50th=[ 3641], 99.90th=[ 3842], 99.95th=[ 3842], 00:22:21.678 | 99.99th=[ 3842] 00:22:21.678 bw ( KiB/s): min=23744, max=518144, per=7.97%, avg=222534.40, stdev=207601.19, samples=10 00:22:21.678 iops : min= 23, max= 506, avg=217.30, stdev=202.76, samples=10 00:22:21.678 lat (msec) : 100=0.08%, 250=15.18%, 500=51.44%, 750=6.23%, 1000=2.38% 00:22:21.678 lat (msec) : 2000=2.46%, >=2000=22.23% 00:22:21.678 cpu : usr=0.02%, sys=1.43%, ctx=1148, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.678 issued rwts: total=1219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825696: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=95, BW=95.5MiB/s (100MB/s)(966MiB/10114msec) 00:22:21.678 slat (usec): min=47, max=2090.7k, avg=10423.40, stdev=114920.52 00:22:21.678 clat (msec): min=39, max=6880, avg=1293.82, stdev=2064.78 00:22:21.678 lat (msec): min=115, max=6882, avg=1304.25, stdev=2070.99 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 321], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 393], 00:22:21.678 | 30.00th=[ 401], 40.00th=[ 414], 50.00th=[ 426], 60.00th=[ 535], 00:22:21.678 | 70.00th=[ 600], 80.00th=[ 735], 90.00th=[ 6678], 95.00th=[ 6745], 00:22:21.678 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:22:21.678 | 99.99th=[ 6879] 00:22:21.678 bw ( KiB/s): min= 2048, max=372736, per=6.15%, avg=171622.40, stdev=154704.38, samples=10 00:22:21.678 iops : min= 2, max= 364, avg=167.60, stdev=151.08, samples=10 00:22:21.678 lat (msec) : 50=0.10%, 250=0.31%, 500=55.38%, 750=25.47%, 1000=4.35% 00:22:21.678 lat (msec) : >=2000=14.39% 00:22:21.678 cpu : usr=0.08%, sys=1.77%, ctx=830, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.678 issued rwts: total=966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825697: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=13, BW=13.3MiB/s (14.0MB/s)(134MiB/10042msec) 00:22:21.678 slat (usec): min=132, max=2097.4k, avg=74747.79, stdev=355436.70 00:22:21.678 clat (msec): min=24, max=10030, avg=6164.84, stdev=3180.34 00:22:21.678 lat (msec): min=42, max=10032, avg=6239.58, stdev=3152.52 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 4396], 00:22:21.678 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 6611], 00:22:21.678 | 70.00th=[ 6678], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:22:21.678 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:22:21.678 | 99.99th=[10000] 00:22:21.678 bw ( 
KiB/s): min=14054, max=14054, per=0.50%, avg=14054.00, stdev= 0.00, samples=1 00:22:21.678 iops : min= 13, max= 13, avg=13.00, stdev= 0.00, samples=1 00:22:21.678 lat (msec) : 50=6.72%, 100=4.48%, 250=4.48%, >=2000=84.33% 00:22:21.678 cpu : usr=0.02%, sys=0.94%, ctx=140, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=6.0%, 16=11.9%, 32=23.9%, >=64=53.0% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=87.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=12.5% 00:22:21.678 issued rwts: total=134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825698: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=80, BW=80.4MiB/s (84.4MB/s)(810MiB/10069msec) 00:22:21.678 slat (usec): min=32, max=2095.2k, avg=12387.91, stdev=85227.46 00:22:21.678 clat (msec): min=30, max=4200, avg=1283.71, stdev=1023.09 00:22:21.678 lat (msec): min=130, max=4220, avg=1296.10, stdev=1027.63 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 184], 5.00th=[ 232], 10.00th=[ 296], 20.00th=[ 338], 00:22:21.678 | 30.00th=[ 451], 40.00th=[ 919], 50.00th=[ 1083], 60.00th=[ 1200], 00:22:21.678 | 70.00th=[ 1469], 80.00th=[ 2005], 90.00th=[ 2937], 95.00th=[ 3574], 00:22:21.678 | 99.00th=[ 4111], 99.50th=[ 4144], 99.90th=[ 4212], 99.95th=[ 4212], 00:22:21.678 | 99.99th=[ 4212] 00:22:21.678 bw ( KiB/s): min=14336, max=398562, per=3.85%, avg=107379.85, stdev=96132.99, samples=13 00:22:21.678 iops : min= 14, max= 389, avg=104.85, stdev=93.82, samples=13 00:22:21.678 lat (msec) : 50=0.12%, 250=7.04%, 500=23.95%, 750=4.44%, 1000=7.53% 00:22:21.678 lat (msec) : 2000=36.67%, >=2000=20.25% 00:22:21.678 cpu : usr=0.02%, sys=1.22%, ctx=1460, majf=0, minf=32769 00:22:21.678 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.678 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825699: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=11, BW=11.7MiB/s (12.2MB/s)(118MiB/10111msec) 00:22:21.678 slat (usec): min=494, max=2099.3k, avg=85345.38, stdev=383869.28 00:22:21.678 clat (msec): min=39, max=10108, avg=7438.49, stdev=2757.70 00:22:21.678 lat (msec): min=131, max=10110, avg=7523.84, stdev=2681.53 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 131], 5.00th=[ 155], 10.00th=[ 4329], 20.00th=[ 6477], 00:22:21.678 | 30.00th=[ 6477], 40.00th=[ 6611], 50.00th=[ 6611], 60.00th=[ 8792], 00:22:21.678 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:22:21.678 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:22:21.678 | 99.99th=[10134] 00:22:21.678 lat (msec) : 50=0.85%, 250=5.08%, >=2000=94.07% 00:22:21.678 cpu : usr=0.00%, sys=0.92%, ctx=119, majf=0, minf=30209 00:22:21.678 IO depths : 1=0.8%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.6%, 32=27.1%, >=64=46.6% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:22:21.678 issued rwts: total=118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 
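[Editor's note] Each "jobN:" block above and below is fio's standard per-job read summary: the "clat percentiles" rows give the completion-latency distribution, the "bw"/"iops" rows give per-sample throughput, and the "IO depths" and "latency : ... depth=128" rows record the queue depth the job ran at. The numbers are consistent with 1 MiB reads at queue depth 128 (IOPS and MiB/s track one another one-to-one in every block). For reference, an invocation of the following shape would produce per-job blocks like these against one of the connected namespaces; the device path, ioengine, access pattern, block size, and runtime are illustrative assumptions, not values taken from the test script:

# Hypothetical fio command line, sketched to match the shape of the output
# in this log; /dev/nvme0n1, libaio, randread, 1M blocks, and a 10 s runtime
# are assumptions, not values from target/srq_overwhelm.sh.
fio --name=srq_overwhelm --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=1M --iodepth=128 \
    --time_based --runtime=10

With --bs=1M, each completed I/O is one mebibyte, which is why the reported IOPS and MiB/s figures coincide numerically throughout this report.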
00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825700: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=154, BW=154MiB/s (162MB/s)(1850MiB/12001msec) 00:22:21.678 slat (usec): min=34, max=2047.0k, avg=5409.28, stdev=48574.76 00:22:21.678 clat (msec): min=250, max=2626, avg=802.58, stdev=712.69 00:22:21.678 lat (msec): min=251, max=2628, avg=807.99, stdev=714.31 00:22:21.678 clat percentiles (msec): 00:22:21.678 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 275], 20.00th=[ 376], 00:22:21.678 | 30.00th=[ 384], 40.00th=[ 401], 50.00th=[ 439], 60.00th=[ 542], 00:22:21.678 | 70.00th=[ 785], 80.00th=[ 1150], 90.00th=[ 2299], 95.00th=[ 2601], 00:22:21.678 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2635], 99.95th=[ 2635], 00:22:21.678 | 99.99th=[ 2635] 00:22:21.678 bw ( KiB/s): min=30720, max=489472, per=7.90%, avg=220525.38, stdev=120566.54, samples=16 00:22:21.678 iops : min= 30, max= 478, avg=215.31, stdev=117.72, samples=16 00:22:21.678 lat (msec) : 500=54.92%, 750=12.92%, 1000=10.54%, 2000=7.95%, >=2000=13.68% 00:22:21.678 cpu : usr=0.04%, sys=1.61%, ctx=1747, majf=0, minf=32770 00:22:21.678 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:22:21.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.678 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.678 job5: (groupid=0, jobs=1): err= 0: pid=825701: Thu Jul 25 19:13:13 2024 00:22:21.678 read: IOPS=191, BW=191MiB/s (200MB/s)(2310MiB/12092msec) 00:22:21.679 slat (usec): min=31, max=2164.5k, avg=5175.59, stdev=78358.95 00:22:21.679 clat (msec): min=128, max=3700, avg=503.44, stdev=817.35 00:22:21.679 lat (msec): min=128, max=3702, avg=508.62, stdev=822.90 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 132], 00:22:21.679 | 30.00th=[ 132], 40.00th=[ 133], 50.00th=[ 134], 60.00th=[ 134], 00:22:21.679 | 70.00th=[ 136], 80.00th=[ 617], 90.00th=[ 2366], 95.00th=[ 2635], 00:22:21.679 | 99.00th=[ 2869], 99.50th=[ 3641], 99.90th=[ 3708], 99.95th=[ 3708], 00:22:21.679 | 99.99th=[ 3708] 00:22:21.679 bw ( KiB/s): min=55296, max=993280, per=16.01%, avg=446873.60, stdev=398557.85, samples=10 00:22:21.679 iops : min= 54, max= 970, avg=436.40, stdev=389.22, samples=10 00:22:21.679 lat (msec) : 250=74.46%, 500=0.56%, 750=12.68%, 1000=0.09%, >=2000=12.21% 00:22:21.679 cpu : usr=0.05%, sys=1.76%, ctx=1809, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.679 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: (groupid=0, jobs=1): err= 0: pid=825702: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=77, BW=77.1MiB/s (80.8MB/s)(775MiB/10056msec) 00:22:21.679 slat (usec): min=32, max=2154.8k, avg=12937.17, stdev=109144.63 00:22:21.679 clat (msec): min=27, max=5957, avg=1318.82, stdev=1461.02 00:22:21.679 lat (msec): min=115, max=5966, avg=1331.75, stdev=1470.05 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 138], 5.00th=[ 305], 10.00th=[ 527], 20.00th=[ 617], 00:22:21.679 | 30.00th=[ 684], 40.00th=[ 726], 50.00th=[ 818], 
60.00th=[ 860], 00:22:21.679 | 70.00th=[ 936], 80.00th=[ 978], 90.00th=[ 4732], 95.00th=[ 4866], 00:22:21.679 | 99.00th=[ 5873], 99.50th=[ 5873], 99.90th=[ 5940], 99.95th=[ 5940], 00:22:21.679 | 99.99th=[ 5940] 00:22:21.679 bw ( KiB/s): min=22528, max=198656, per=5.28%, avg=147228.44, stdev=52288.66, samples=9 00:22:21.679 iops : min= 22, max= 194, avg=143.78, stdev=51.06, samples=9 00:22:21.679 lat (msec) : 50=0.13%, 250=3.74%, 500=5.94%, 750=34.58%, 1000=38.84% 00:22:21.679 lat (msec) : 2000=2.58%, >=2000=14.19% 00:22:21.679 cpu : usr=0.01%, sys=1.02%, ctx=933, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.679 issued rwts: total=775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: (groupid=0, jobs=1): err= 0: pid=825703: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=56, BW=56.2MiB/s (59.0MB/s)(678MiB/12058msec) 00:22:21.679 slat (usec): min=34, max=2038.8k, avg=14751.76, stdev=106700.83 00:22:21.679 clat (msec): min=255, max=4926, avg=1992.43, stdev=1489.18 00:22:21.679 lat (msec): min=258, max=4936, avg=2007.18, stdev=1490.69 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 268], 5.00th=[ 347], 10.00th=[ 558], 20.00th=[ 869], 00:22:21.679 | 30.00th=[ 911], 40.00th=[ 1011], 50.00th=[ 1150], 60.00th=[ 1401], 00:22:21.679 | 70.00th=[ 2970], 80.00th=[ 3205], 90.00th=[ 4597], 95.00th=[ 4799], 00:22:21.679 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:22:21.679 | 99.99th=[ 4933] 00:22:21.679 bw ( KiB/s): min= 2048, max=190464, per=3.37%, avg=94037.33, stdev=54901.25, samples=12 00:22:21.679 iops : min= 2, max= 186, avg=91.83, stdev=53.61, samples=12 00:22:21.679 lat (msec) : 500=8.70%, 750=4.57%, 1000=26.40%, 2000=22.71%, >=2000=37.61% 00:22:21.679 cpu : usr=0.03%, sys=0.82%, ctx=1351, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:22:21.679 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: (groupid=0, jobs=1): err= 0: pid=825704: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=86, BW=86.3MiB/s (90.4MB/s)(868MiB/10063msec) 00:22:21.679 slat (usec): min=37, max=2036.7k, avg=11535.49, stdev=74777.58 00:22:21.679 clat (msec): min=45, max=2970, avg=1383.38, stdev=716.66 00:22:21.679 lat (msec): min=139, max=3009, avg=1394.92, stdev=716.45 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 155], 5.00th=[ 642], 10.00th=[ 676], 20.00th=[ 751], 00:22:21.679 | 30.00th=[ 852], 40.00th=[ 969], 50.00th=[ 1053], 60.00th=[ 1401], 00:22:21.679 | 70.00th=[ 1804], 80.00th=[ 2072], 90.00th=[ 2534], 95.00th=[ 2869], 00:22:21.679 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 2970], 99.95th=[ 2970], 00:22:21.679 | 99.99th=[ 2970] 00:22:21.679 bw ( KiB/s): min=22528, max=190464, per=3.88%, avg=108230.93, stdev=56129.71, samples=14 00:22:21.679 iops : min= 22, max= 186, avg=105.64, stdev=54.78, samples=14 00:22:21.679 lat (msec) : 50=0.12%, 250=1.73%, 750=17.97%, 1000=23.04%, 2000=33.06% 00:22:21.679 lat (msec) : 
>=2000=24.08% 00:22:21.679 cpu : usr=0.01%, sys=1.37%, ctx=1336, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.7% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.679 issued rwts: total=868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: (groupid=0, jobs=1): err= 0: pid=825705: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=48, BW=48.7MiB/s (51.0MB/s)(491MiB/10086msec) 00:22:21.679 slat (usec): min=40, max=2095.0k, avg=20472.75, stdev=108827.22 00:22:21.679 clat (msec): min=30, max=4373, avg=2143.08, stdev=776.38 00:22:21.679 lat (msec): min=126, max=4387, avg=2163.55, stdev=774.05 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 142], 5.00th=[ 1267], 10.00th=[ 1368], 20.00th=[ 1620], 00:22:21.679 | 30.00th=[ 1754], 40.00th=[ 1821], 50.00th=[ 1888], 60.00th=[ 2165], 00:22:21.679 | 70.00th=[ 2232], 80.00th=[ 2735], 90.00th=[ 3339], 95.00th=[ 3876], 00:22:21.679 | 99.00th=[ 4279], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:22:21.679 | 99.99th=[ 4396] 00:22:21.679 bw ( KiB/s): min=14336, max=114688, per=2.22%, avg=61964.92, stdev=30330.31, samples=12 00:22:21.679 iops : min= 14, max= 112, avg=60.42, stdev=29.66, samples=12 00:22:21.679 lat (msec) : 50=0.20%, 250=1.43%, 2000=54.79%, >=2000=43.58% 00:22:21.679 cpu : usr=0.02%, sys=1.07%, ctx=1419, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.2% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:22:21.679 issued rwts: total=491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: (groupid=0, jobs=1): err= 0: pid=825706: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=96, BW=96.1MiB/s (101MB/s)(962MiB/10015msec) 00:22:21.679 slat (usec): min=33, max=2064.3k, avg=10390.46, stdev=99575.89 00:22:21.679 clat (msec): min=13, max=3010, avg=1134.19, stdev=968.34 00:22:21.679 lat (msec): min=15, max=3012, avg=1144.58, stdev=970.27 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 32], 5.00th=[ 418], 10.00th=[ 435], 20.00th=[ 456], 00:22:21.679 | 30.00th=[ 481], 40.00th=[ 550], 50.00th=[ 625], 60.00th=[ 735], 00:22:21.679 | 70.00th=[ 793], 80.00th=[ 2601], 90.00th=[ 2802], 95.00th=[ 2903], 00:22:21.679 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004], 00:22:21.679 | 99.99th=[ 3004] 00:22:21.679 bw ( KiB/s): min=53248, max=279086, per=6.13%, avg=171012.20, stdev=69363.20, samples=10 00:22:21.679 iops : min= 52, max= 272, avg=166.90, stdev=67.57, samples=10 00:22:21.679 lat (msec) : 20=0.31%, 50=1.35%, 250=1.04%, 500=30.87%, 750=27.96% 00:22:21.679 lat (msec) : 1000=10.91%, 2000=1.14%, >=2000=26.40% 00:22:21.679 cpu : usr=0.04%, sys=1.45%, ctx=1064, majf=0, minf=32769 00:22:21.679 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:22:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.679 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.679 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.679 job5: 
(groupid=0, jobs=1): err= 0: pid=825707: Thu Jul 25 19:13:13 2024 00:22:21.679 read: IOPS=122, BW=123MiB/s (128MB/s)(1227MiB/10013msec) 00:22:21.679 slat (usec): min=30, max=2133.9k, avg=8147.09, stdev=90810.04 00:22:21.679 clat (msec): min=11, max=3204, avg=888.62, stdev=1012.81 00:22:21.679 lat (msec): min=13, max=3208, avg=896.76, stdev=1016.51 00:22:21.679 clat percentiles (msec): 00:22:21.679 | 1.00th=[ 39], 5.00th=[ 121], 10.00th=[ 128], 20.00th=[ 150], 00:22:21.679 | 30.00th=[ 222], 40.00th=[ 292], 50.00th=[ 388], 60.00th=[ 468], 00:22:21.680 | 70.00th=[ 902], 80.00th=[ 2299], 90.00th=[ 2836], 95.00th=[ 3004], 00:22:21.680 | 99.00th=[ 3104], 99.50th=[ 3138], 99.90th=[ 3205], 99.95th=[ 3205], 00:22:21.680 | 99.99th=[ 3205] 00:22:21.680 bw ( KiB/s): min=36864, max=475136, per=8.08%, avg=225429.70, stdev=162747.19, samples=10 00:22:21.680 iops : min= 36, max= 464, avg=220.00, stdev=158.77, samples=10 00:22:21.680 lat (msec) : 20=0.41%, 50=0.65%, 250=32.52%, 500=27.38%, 750=4.07% 00:22:21.680 lat (msec) : 1000=9.54%, 2000=4.73%, >=2000=20.70% 00:22:21.680 cpu : usr=0.03%, sys=1.43%, ctx=1421, majf=0, minf=32769 00:22:21.680 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:22:21.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.680 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.680 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.680 job5: (groupid=0, jobs=1): err= 0: pid=825708: Thu Jul 25 19:13:13 2024 00:22:21.680 read: IOPS=115, BW=116MiB/s (122MB/s)(1175MiB/10131msec) 00:22:21.680 slat (usec): min=31, max=2154.7k, avg=8511.81, stdev=93115.44 00:22:21.680 clat (msec): min=112, max=4273, avg=947.48, stdev=1247.88 00:22:21.680 lat (msec): min=112, max=4283, avg=955.99, stdev=1254.27 00:22:21.680 clat percentiles (msec): 00:22:21.680 | 1.00th=[ 118], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 129], 00:22:21.680 | 30.00th=[ 134], 40.00th=[ 171], 50.00th=[ 334], 60.00th=[ 384], 00:22:21.680 | 70.00th=[ 567], 80.00th=[ 2232], 90.00th=[ 3306], 95.00th=[ 3641], 00:22:21.680 | 99.00th=[ 4077], 99.50th=[ 4178], 99.90th=[ 4212], 99.95th=[ 4279], 00:22:21.680 | 99.99th=[ 4279] 00:22:21.680 bw ( KiB/s): min= 2048, max=946176, per=7.69%, avg=214630.40, stdev=305312.77, samples=10 00:22:21.680 iops : min= 2, max= 924, avg=209.60, stdev=298.16, samples=10 00:22:21.680 lat (msec) : 250=43.32%, 500=23.32%, 750=7.91%, 1000=0.17%, 2000=1.79% 00:22:21.680 lat (msec) : >=2000=23.49% 00:22:21.680 cpu : usr=0.04%, sys=1.20%, ctx=1549, majf=0, minf=32769 00:22:21.680 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:22:21.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.680 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.680 issued rwts: total=1175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.680 00:22:21.680 Run status group 0 (all jobs): 00:22:21.680 READ: bw=2725MiB/s (2858MB/s), 290KiB/s-196MiB/s (297kB/s-206MB/s), io=37.9GiB (40.7GB), run=10013-14230msec 00:22:21.680 00:22:21.680 Disk stats (read/write): 00:22:21.680 nvme0n1: ios=26221/0, merge=0/0, ticks=8751139/0, in_queue=8751139, util=99.01% 00:22:21.680 nvme1n1: ios=18352/0, merge=0/0, ticks=10606605/0, in_queue=10606605, util=99.15% 00:22:21.680 nvme2n1: ios=26813/0, merge=0/0, ticks=10406612/0, 
in_queue=10406612, util=99.13% 00:22:21.680 nvme3n1: ios=65070/0, merge=0/0, ticks=11111150/0, in_queue=11111150, util=98.93% 00:22:21.680 nvme4n1: ios=73145/0, merge=0/0, ticks=10575473/0, in_queue=10575473, util=99.20% 00:22:21.680 nvme5n1: ios=98896/0, merge=0/0, ticks=9734541/0, in_queue=9734541, util=99.24% 00:22:21.680 19:13:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:22:21.680 19:13:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:22:21.680 19:13:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:21.680 19:13:14 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:22:24.453 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:24.453 19:13:16 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:26.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1231 -- # return 0 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:26.414 19:13:18 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:28.995 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:28.995 19:13:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:31.529 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:22:31.529 19:13:23 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:31.529 19:13:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:33.434 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:33.434 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:22:33.434 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:33.434 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:22:33.435 19:13:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:35.968 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 
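The xtrace above and immediately below repeats the same three-step teardown for cnode0 through cnode5: disconnect the initiator, poll lsblk until the namespace with the matching SPDK serial disappears, then delete the subsystem on the target. A minimal standalone sketch of that pattern, reconstructed from the trace (the 15-attempt cap is an assumption, and the harness drives the RPC through its rpc_cmd fixture rather than calling rpc.py directly):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Poll until no block device reports this serial anymore (lsblk -l
        # lists devices flat, so grep -w can match the SERIAL column directly).
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed retry cap, roughly 15s
            sleep 1
        done
        return 0
    }

    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK0000000000000$i"
        # plain rpc.py call standing in for the harness's rpc_cmd wrapper
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done

Waiting on the serial rather than the device node matters here: the kernel tears the namespace down asynchronously after the disconnect, so deleting the subsystem immediately could race with the unplug.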
00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:35.968 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:35.969 rmmod nvme_rdma 00:22:35.969 rmmod nvme_fabrics 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 822013 ']' 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 822013 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 822013 ']' 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 822013 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 822013 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 822013' 00:22:35.969 killing process with pid 822013 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 822013 00:22:35.969 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@974 -- # wait 822013 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:36.228 00:22:36.228 real 0m55.615s 00:22:36.228 user 3m24.723s 00:22:36.228 sys 0m14.774s 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:22:36.228 ************************************ 00:22:36.228 END TEST nvmf_srq_overwhelm 00:22:36.228 ************************************ 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.228 ************************************ 00:22:36.228 START TEST nvmf_shutdown 00:22:36.228 ************************************ 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:22:36.228 * Looking for test storage... 00:22:36.228 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.228 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same three toolchain prefixes repeated...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same repeated prefixes...]:/usr/local/bin:[...same trailing dirs...] 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same repeated prefixes...]:/usr/local/bin:[...same trailing dirs...] 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same repeated prefixes...]:/usr/local/bin:[...same trailing dirs...] 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.229 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.488 ************************************ 00:22:36.488 START TEST nvmf_shutdown_tc1 00:22:36.488 ************************************ 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.488 19:13:28
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.488 19:13:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:43.059 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:43.059 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:43.059 19:13:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:43.059 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:43.060 Found net devices under 0000:af:00.0: mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:43.060 Found net devices under 0000:af:00.1: mlx_0_1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:22:43.060 19:13:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:43.060 19:13:34 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:43.060 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:43.060 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:43.060 altname enp175s0f0np0 00:22:43.060 altname ens801f0np0 00:22:43.060 inet 192.168.100.8/24 scope global mlx_0_0 00:22:43.060 valid_lft forever preferred_lft forever 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:43.060 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:43.060 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:43.060 altname enp175s0f1np1 00:22:43.060 altname ens801f1np1 00:22:43.060 inet 192.168.100.9/24 scope global mlx_0_1 00:22:43.060 valid_lft forever preferred_lft forever 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:43.060 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 
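The interface/IP discovery traced in the two passes above boils down to one pipeline per RDMA netdev. A self-contained sketch of get_ip_address as it appears in the xtrace (the interface names are the ones this rig reported; on other hardware get_rdma_if_list would return different names):

    # "ip -o -4 addr show <if>" prints one line per address; field 4 is in
    # CIDR notation ("192.168.100.8/24"), so cut strips the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        ip=$(get_ip_address "$nic")
        [[ -n "$ip" ]] && echo "$nic -> $ip"   # mlx_0_0 -> 192.168.100.8, etc.
    done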
00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:43.061 192.168.100.9' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:43.061 192.168.100.9' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:43.061 192.168.100.9' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=833542 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 833542 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 833542 ']' 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.061 19:13:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.061 [2024-07-25 19:13:34.645439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:43.061 [2024-07-25 19:13:34.645490] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.061 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.061 [2024-07-25 19:13:34.715344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.061 [2024-07-25 19:13:34.789809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.061 [2024-07-25 19:13:34.789852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.061 [2024-07-25 19:13:34.789859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.061 [2024-07-25 19:13:34.789865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.061 [2024-07-25 19:13:34.789870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.061 [2024-07-25 19:13:34.789937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.061 [2024-07-25 19:13:34.790047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.061 [2024-07-25 19:13:34.790153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.061 [2024-07-25 19:13:34.790153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.061 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.319 [2024-07-25 19:13:35.554795] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf070f0/0xf0b5e0) succeed. 00:22:43.319 [2024-07-25 19:13:35.564204] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf08730/0xf4cc80) succeed. 
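Everything from nvmfappstart through the create_ib_device notices above corresponds to three commands. A hedged recreation outside the harness (paths relative to an SPDK checkout; the polling loop stands in for the harness's waitforlisten helper):

    modprobe nvme-rdma                          # initiator-side kernel module
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # waitforlisten equivalent: retry until the RPC socket answers
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # -u sets the I/O unit size in bytes; listeners are added later, per subsystem
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The core mask 0x1E keeps reactors on cores 1-4 and leaves core 0 free, which matches the four "Reactor started" notices in the log.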
00:22:43.319 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.319 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:43.319 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:43.319 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.319 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.320 19:13:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 Malloc1 00:22:43.320 [2024-07-25 19:13:35.776091] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:43.578 Malloc2 00:22:43.578 Malloc3 00:22:43.578 Malloc4 00:22:43.578 Malloc5 00:22:43.578 Malloc6 00:22:43.578 Malloc7 00:22:43.836 Malloc8 00:22:43.836 Malloc9 00:22:43.836 Malloc10 00:22:43.836 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.836 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=833847 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 833847 /var/tmp/bdevperf.sock 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 833847 ']' 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
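The create_subsystems loop above only shows `cat` because shutdown.sh stages one batch of RPCs per subsystem into rpcs.txt and replays the whole file through a single rpc_cmd; the Malloc1-Malloc10 notices and the 192.168.100.8:4420 listener are the visible result. A sketch consistent with that output and with MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 (the RPC names are standard SPDK, but the exact batch contents and the serial-number scheme are assumptions):

    rm -f rpcs.txt
    for i in $(seq 1 10); do
        cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    EOF
    done
    ./scripts/rpc.py < rpcs.txt   # rpc.py executes one command per stdin line

Batching over a single rpc.py connection avoids re-opening the UNIX socket forty times, which is why the harness collects the commands first instead of issuing them inside the loop.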
00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.837 )") 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.837 [2024-07-25 19:13:36.253651] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
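Each heredoc above contributes one entry to the JSON that gen_nvmf_target_json pipes into bdev_svc via /dev/fd/63. Substituting the values this run established earlier (TEST_TRANSPORT=rdma, NVMF_FIRST_TARGET_IP=192.168.100.8, NVMF_PORT=4420, and hdgst/ddgst defaulting to false), the Nvme1 entry renders as below; the surrounding config wrapper that gen_nvmf_target_json adds around the entries is omitted here:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }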
00:22:43.837 [2024-07-25 19:13:36.253697] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.837 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.837 { 00:22:43.837 "params": { 00:22:43.837 "name": "Nvme$subsystem", 00:22:43.837 "trtype": "$TEST_TRANSPORT", 00:22:43.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.837 "adrfam": "ipv4", 00:22:43.837 "trsvcid": "$NVMF_PORT", 00:22:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.837 "hdgst": ${hdgst:-false}, 00:22:43.837 "ddgst": ${ddgst:-false} 00:22:43.837 }, 00:22:43.837 "method": "bdev_nvme_attach_controller" 00:22:43.837 } 00:22:43.837 EOF 00:22:43.838 )") 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.838 { 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme$subsystem", 00:22:43.838 "trtype": "$TEST_TRANSPORT", 00:22:43.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "$NVMF_PORT", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.838 "hdgst": ${hdgst:-false}, 00:22:43.838 "ddgst": ${ddgst:-false} 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 } 00:22:43.838 EOF 00:22:43.838 )") 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.838 { 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme$subsystem", 00:22:43.838 "trtype": "$TEST_TRANSPORT", 00:22:43.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "$NVMF_PORT", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.838 "hdgst": ${hdgst:-false}, 00:22:43.838 "ddgst": ${ddgst:-false} 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 } 00:22:43.838 EOF 00:22:43.838 )") 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.838 { 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme$subsystem", 00:22:43.838 "trtype": "$TEST_TRANSPORT", 00:22:43.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.838 "adrfam": 
"ipv4", 00:22:43.838 "trsvcid": "$NVMF_PORT", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.838 "hdgst": ${hdgst:-false}, 00:22:43.838 "ddgst": ${ddgst:-false} 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 } 00:22:43.838 EOF 00:22:43.838 )") 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:43.838 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:43.838 19:13:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme1", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme2", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme3", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme4", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme5", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme6", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 
00:22:43.838 "name": "Nvme7", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme8", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme9", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.838 "trsvcid": "4420", 00:22:43.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.838 "hdgst": false, 00:22:43.838 "ddgst": false 00:22:43.838 }, 00:22:43.838 "method": "bdev_nvme_attach_controller" 00:22:43.838 },{ 00:22:43.838 "params": { 00:22:43.838 "name": "Nvme10", 00:22:43.838 "trtype": "rdma", 00:22:43.838 "traddr": "192.168.100.8", 00:22:43.838 "adrfam": "ipv4", 00:22:43.839 "trsvcid": "4420", 00:22:43.839 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.839 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.839 "hdgst": false, 00:22:43.839 "ddgst": false 00:22:43.839 }, 00:22:43.839 "method": "bdev_nvme_attach_controller" 00:22:43.839 }' 00:22:44.097 [2024-07-25 19:13:36.322945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.097 [2024-07-25 19:13:36.394952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 833847 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:45.032 19:13:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:45.970 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 833847 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 833542 00:22:45.970 19:13:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.970 "trsvcid": "$NVMF_PORT", 00:22:45.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.970 "hdgst": ${hdgst:-false}, 00:22:45.970 "ddgst": ${ddgst:-false} 00:22:45.970 }, 00:22:45.970 "method": "bdev_nvme_attach_controller" 00:22:45.970 } 00:22:45.970 EOF 00:22:45.970 )") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.970 "trsvcid": "$NVMF_PORT", 00:22:45.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.970 "hdgst": ${hdgst:-false}, 00:22:45.970 "ddgst": ${ddgst:-false} 00:22:45.970 }, 00:22:45.970 "method": "bdev_nvme_attach_controller" 00:22:45.970 } 00:22:45.970 EOF 00:22:45.970 )") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.970 "trsvcid": "$NVMF_PORT", 00:22:45.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.970 "hdgst": ${hdgst:-false}, 00:22:45.970 "ddgst": ${ddgst:-false} 00:22:45.970 }, 00:22:45.970 "method": "bdev_nvme_attach_controller" 00:22:45.970 } 00:22:45.970 EOF 00:22:45.970 )") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.970 "trsvcid": "$NVMF_PORT", 00:22:45.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.970 "hdgst": ${hdgst:-false}, 00:22:45.970 "ddgst": ${ddgst:-false} 00:22:45.970 }, 00:22:45.970 "method": "bdev_nvme_attach_controller" 00:22:45.970 } 00:22:45.970 EOF 00:22:45.970 )") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.970 "trsvcid": "$NVMF_PORT", 00:22:45.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.970 "hdgst": ${hdgst:-false}, 00:22:45.970 "ddgst": ${ddgst:-false} 00:22:45.970 }, 00:22:45.970 "method": "bdev_nvme_attach_controller" 00:22:45.970 } 00:22:45.970 EOF 00:22:45.970 )") 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.970 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.970 { 00:22:45.970 "params": { 00:22:45.970 "name": "Nvme$subsystem", 00:22:45.970 "trtype": "$TEST_TRANSPORT", 00:22:45.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.970 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "$NVMF_PORT", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.971 "hdgst": ${hdgst:-false}, 00:22:45.971 "ddgst": ${ddgst:-false} 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 } 00:22:45.971 EOF 00:22:45.971 )") 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.971 { 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme$subsystem", 00:22:45.971 "trtype": "$TEST_TRANSPORT", 00:22:45.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "$NVMF_PORT", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.971 "hdgst": ${hdgst:-false}, 00:22:45.971 "ddgst": ${ddgst:-false} 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 } 00:22:45.971 EOF 00:22:45.971 )") 00:22:45.971 19:13:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.971 [2024-07-25 19:13:38.305641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:45.971 [2024-07-25 19:13:38.305692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834310 ] 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.971 { 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme$subsystem", 00:22:45.971 "trtype": "$TEST_TRANSPORT", 00:22:45.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "$NVMF_PORT", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.971 "hdgst": ${hdgst:-false}, 00:22:45.971 "ddgst": ${ddgst:-false} 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 } 00:22:45.971 EOF 00:22:45.971 )") 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.971 { 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme$subsystem", 00:22:45.971 "trtype": "$TEST_TRANSPORT", 00:22:45.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "$NVMF_PORT", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.971 "hdgst": ${hdgst:-false}, 00:22:45.971 "ddgst": ${ddgst:-false} 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 } 00:22:45.971 EOF 00:22:45.971 )") 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.971 { 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme$subsystem", 00:22:45.971 "trtype": "$TEST_TRANSPORT", 00:22:45.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "$NVMF_PORT", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.971 "hdgst": ${hdgst:-false}, 00:22:45.971 "ddgst": ${ddgst:-false} 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 } 00:22:45.971 EOF 00:22:45.971 )") 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
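
target/shutdown.sh@91 above invokes bdevperf with --json /dev/fd/62; that path is what bash substitutes for a <(...) process substitution, the same construct the earlier "Killed" line shows feeding bdev_svc. A hedged sketch of the @91 invocation, with the subsystem list 1..10 inferred from the ten Nvme entries in the generated config, and $rootdir assumed to point at the spdk checkout as it does in that line:

num_subsystems=({1..10})
# bash rewrites <(...) into an anonymous /dev/fd/NN path, which is why bdevperf
# logs "--json /dev/fd/62" while actually reading the freshly generated JSON.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1
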
00:22:45.971 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:45.971 19:13:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme1", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme2", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme3", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme4", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme5", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme6", 00:22:45.971 "trtype": "rdma", 00:22:45.971 "traddr": "192.168.100.8", 00:22:45.971 "adrfam": "ipv4", 00:22:45.971 "trsvcid": "4420", 00:22:45.971 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.971 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.971 "hdgst": false, 00:22:45.971 "ddgst": false 00:22:45.971 }, 00:22:45.971 "method": "bdev_nvme_attach_controller" 00:22:45.971 },{ 00:22:45.971 "params": { 00:22:45.971 "name": "Nvme7", 00:22:45.971 "trtype": "rdma", 00:22:45.972 "traddr": "192.168.100.8", 00:22:45.972 "adrfam": "ipv4", 00:22:45.972 "trsvcid": "4420", 00:22:45.972 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.972 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:45.972 "hdgst": false, 00:22:45.972 "ddgst": false 00:22:45.972 }, 00:22:45.972 "method": "bdev_nvme_attach_controller" 00:22:45.972 },{ 00:22:45.972 "params": { 00:22:45.972 "name": "Nvme8", 00:22:45.972 "trtype": "rdma", 00:22:45.972 "traddr": "192.168.100.8", 00:22:45.972 "adrfam": "ipv4", 00:22:45.972 "trsvcid": 
"4420", 00:22:45.972 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.972 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.972 "hdgst": false, 00:22:45.972 "ddgst": false 00:22:45.972 }, 00:22:45.972 "method": "bdev_nvme_attach_controller" 00:22:45.972 },{ 00:22:45.972 "params": { 00:22:45.972 "name": "Nvme9", 00:22:45.972 "trtype": "rdma", 00:22:45.972 "traddr": "192.168.100.8", 00:22:45.972 "adrfam": "ipv4", 00:22:45.972 "trsvcid": "4420", 00:22:45.972 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.972 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.972 "hdgst": false, 00:22:45.972 "ddgst": false 00:22:45.972 }, 00:22:45.972 "method": "bdev_nvme_attach_controller" 00:22:45.972 },{ 00:22:45.972 "params": { 00:22:45.972 "name": "Nvme10", 00:22:45.972 "trtype": "rdma", 00:22:45.972 "traddr": "192.168.100.8", 00:22:45.972 "adrfam": "ipv4", 00:22:45.972 "trsvcid": "4420", 00:22:45.972 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.972 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.972 "hdgst": false, 00:22:45.972 "ddgst": false 00:22:45.972 }, 00:22:45.972 "method": "bdev_nvme_attach_controller" 00:22:45.972 }' 00:22:45.972 [2024-07-25 19:13:38.376058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.231 [2024-07-25 19:13:38.450153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.165 Running I/O for 1 seconds... 00:22:48.101 00:22:48.101 Latency(us) 00:22:48.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.101 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme1n1 : 1.17 345.66 21.60 0.00 0.00 174039.28 11739.49 251658.24 00:22:48.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme2n1 : 1.18 378.55 23.66 0.00 0.00 164278.95 7180.47 178713.82 00:22:48.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme3n1 : 1.18 386.61 24.16 0.00 0.00 158624.46 5755.77 167772.16 00:22:48.101 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme4n1 : 1.19 377.81 23.61 0.00 0.00 160008.65 7693.36 165948.55 00:22:48.101 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme5n1 : 1.19 377.35 23.58 0.00 0.00 158209.14 8206.25 154095.08 00:22:48.101 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme6n1 : 1.19 376.97 23.56 0.00 0.00 155768.72 8605.16 146800.64 00:22:48.101 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme7n1 : 1.19 376.54 23.53 0.00 0.00 153855.94 9061.06 136770.78 00:22:48.101 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme8n1 : 1.19 384.56 24.04 0.00 0.00 148164.68 4758.48 129476.34 00:22:48.101 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme9n1 : 1.19 375.75 23.48 0.00 0.00 149541.72 9858.89 119446.48 
00:22:48.101 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.101 Verification LBA range: start 0x0 length 0x400 00:22:48.101 Nvme10n1 : 1.19 267.92 16.75 0.00 0.00 207145.32 1346.34 413959.57 00:22:48.101 =================================================================================================================== 00:22:48.101 Total : 3647.71 227.98 0.00 0.00 161505.99 1346.34 413959.57 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.360 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:48.360 rmmod nvme_rdma 00:22:48.360 rmmod nvme_fabrics 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 833542 ']' 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 833542 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 833542 ']' 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 833542 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 833542 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 833542' 00:22:48.619 killing process with pid 833542 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 833542 00:22:48.619 19:13:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 833542 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:49.189 00:22:49.189 real 0m12.623s 00:22:49.189 user 0m30.841s 00:22:49.189 sys 0m5.448s 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:49.189 ************************************ 00:22:49.189 END TEST nvmf_shutdown_tc1 00:22:49.189 ************************************ 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.189 ************************************ 00:22:49.189 START TEST nvmf_shutdown_tc2 00:22:49.189 ************************************ 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.189 19:13:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:49.189 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:49.189 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:49.189 Found net devices under 0000:af:00.0: mlx_0_0 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:49.189 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:49.190 Found net devices under 0000:af:00.1: mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:49.190 19:13:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:49.190 19:13:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:49.190 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:49.190 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:22:49.190 altname enp175s0f0np0 00:22:49.190 altname ens801f0np0 00:22:49.190 inet 192.168.100.8/24 scope global mlx_0_0 00:22:49.190 valid_lft forever preferred_lft forever 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:49.190 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:49.190 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:22:49.190 altname enp175s0f1np1 00:22:49.190 altname ens801f1np1 00:22:49.190 inet 192.168.100.9/24 scope global mlx_0_1 00:22:49.190 valid_lft forever preferred_lft forever 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:49.190 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_1 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:49.191 192.168.100.9' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:49.191 192.168.100.9' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:49.191 192.168.100.9' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.191 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=834890 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 834890 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 834890 ']' 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.450 19:13:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.450 19:13:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.450 [2024-07-25 19:13:41.704654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:49.450 [2024-07-25 19:13:41.704699] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.450 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.450 [2024-07-25 19:13:41.774106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.450 [2024-07-25 19:13:41.852707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.450 [2024-07-25 19:13:41.852743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.450 [2024-07-25 19:13:41.852750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.450 [2024-07-25 19:13:41.852757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.450 [2024-07-25 19:13:41.852762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
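Annotation: the xtrace above (nvmf/common.sh@112-113 and @456-458) shows how the harness derives the two RDMA target IPs. A minimal standalone sketch of that pattern, assuming the interface name arrives as the first argument; this mirrors the traced pipeline, not necessarily the harness's exact function body:

    get_ip_address() {
        local interface=$1
        # -o prints one record per line; field 4 is "ADDR/PREFIX", cut drops the prefix
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # The list is two lines; head/tail split it exactly as traced at @457-458
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

On the core mask passed to nvmfappstart: -m 0x1E is binary 11110, so bits 1-4 are set and the target claims cores 1 through 4, which matches the "Total cores available: 4" notice above and the four reactor notices that follow.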
00:22:49.450 [2024-07-25 19:13:41.852886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.450 [2024-07-25 19:13:41.853004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.450 [2024-07-25 19:13:41.853110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.450 [2024-07-25 19:13:41.853111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.387 [2024-07-25 19:13:42.602561] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f1a0f0/0x1f1e5e0) succeed. 00:22:50.387 [2024-07-25 19:13:42.612022] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f1b730/0x1f5fc80) succeed. 
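Annotation: the transport-creation step traced just above goes through rpc_cmd. Assuming the harness's usual forwarding of rpc_cmd arguments to scripts/rpk.py is in fact scripts/rpc.py on the default /var/tmp/spdk.sock socket, the equivalent standalone call would be roughly:

    # -t rdma selects the RDMA transport; -u 8192 sets the I/O unit size in bytes;
    # --num-shared-buffers caps the shared receive buffer pool at 1024 entries
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices immediately above confirm the transport bound both mlx5 ports before any subsystem listener was added.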
00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.387 19:13:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.387 Malloc1 00:22:50.387 [2024-07-25 19:13:42.823237] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:50.387 Malloc2 00:22:50.646 Malloc3 00:22:50.646 Malloc4 00:22:50.646 Malloc5 00:22:50.646 Malloc6 00:22:50.646 Malloc7 00:22:50.646 Malloc8 00:22:50.905 Malloc9 00:22:50.905 Malloc10 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=835172 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 835172 /var/tmp/bdevperf.sock 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 835172 ']' 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
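Annotation: the /dev/fd/63 in the bdevperf command line traced at shutdown.sh@102 is bash process substitution: gen_nvmf_target_json writes the attach configuration to an anonymous fd that bdevperf reads as its --json input. A hedged reconstruction of that invocation:

    # -q 64: queue depth per job; -o 65536: 64 KiB I/Os; -w verify: verification
    # workload; -t 10: run for 10 seconds; -r: bdevperf's own RPC socket, kept
    # separate from the target's /var/tmp/spdk.sock. <(...) expands to /dev/fd/63.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10

Note that bdevperf itself runs with EAL mask 0x1 (visible in its startup banner below), pinning it to core 0 so it does not contend with the target's reactors on cores 1-4.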
00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.905 { 00:22:50.905 "params": { 00:22:50.905 "name": "Nvme$subsystem", 00:22:50.905 "trtype": "$TEST_TRANSPORT", 00:22:50.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.905 "adrfam": "ipv4", 00:22:50.905 "trsvcid": "$NVMF_PORT", 00:22:50.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.905 "hdgst": ${hdgst:-false}, 00:22:50.905 "ddgst": ${ddgst:-false} 00:22:50.905 }, 00:22:50.905 "method": "bdev_nvme_attach_controller" 00:22:50.905 } 00:22:50.905 EOF 00:22:50.905 )") 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.905 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.905 { 00:22:50.905 "params": { 00:22:50.905 "name": "Nvme$subsystem", 00:22:50.905 "trtype": "$TEST_TRANSPORT", 00:22:50.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.905 "adrfam": "ipv4", 00:22:50.905 "trsvcid": "$NVMF_PORT", 00:22:50.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.905 "hdgst": ${hdgst:-false}, 00:22:50.905 "ddgst": ${ddgst:-false} 00:22:50.905 }, 00:22:50.905 "method": "bdev_nvme_attach_controller" 00:22:50.905 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 
00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 [2024-07-25 19:13:43.284100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:50.906 [2024-07-25 19:13:43.284148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835172 ] 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 
00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.906 { 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme$subsystem", 00:22:50.906 "trtype": "$TEST_TRANSPORT", 00:22:50.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "$NVMF_PORT", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.906 "hdgst": ${hdgst:-false}, 00:22:50.906 "ddgst": ${ddgst:-false} 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 } 00:22:50.906 EOF 00:22:50.906 )") 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:50.906 19:13:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme1", 00:22:50.906 "trtype": "rdma", 00:22:50.906 "traddr": "192.168.100.8", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "4420", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.906 "hdgst": false, 00:22:50.906 "ddgst": false 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 },{ 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme2", 00:22:50.906 "trtype": "rdma", 00:22:50.906 "traddr": "192.168.100.8", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "4420", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.906 "hdgst": false, 00:22:50.906 "ddgst": false 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 },{ 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme3", 00:22:50.906 "trtype": "rdma", 00:22:50.906 "traddr": "192.168.100.8", 00:22:50.906 "adrfam": "ipv4", 00:22:50.906 "trsvcid": "4420", 00:22:50.906 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.906 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.906 "hdgst": false, 00:22:50.906 "ddgst": false 00:22:50.906 }, 00:22:50.906 "method": "bdev_nvme_attach_controller" 00:22:50.906 },{ 00:22:50.906 "params": { 00:22:50.906 "name": "Nvme4", 00:22:50.906 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme5", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme6", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme7", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme8", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme9", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 },{ 00:22:50.907 "params": { 00:22:50.907 "name": "Nvme10", 00:22:50.907 "trtype": "rdma", 00:22:50.907 "traddr": "192.168.100.8", 00:22:50.907 "adrfam": "ipv4", 00:22:50.907 "trsvcid": "4420", 00:22:50.907 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.907 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.907 "hdgst": false, 00:22:50.907 "ddgst": false 00:22:50.907 }, 00:22:50.907 "method": "bdev_nvme_attach_controller" 00:22:50.907 }' 00:22:50.907 [2024-07-25 19:13:43.353314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.165 [2024-07-25 19:13:43.425406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.101 Running I/O for 10 seconds... 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.101 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.101 19:13:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.360 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.360 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=34 00:22:52.360 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 34 -ge 100 ']' 00:22:52.360 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.619 19:13:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=188 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 188 -ge 100 ']' 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 835172 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 835172 ']' 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 835172 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.619 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 835172 00:22:52.878 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:52.878 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:52.878 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 835172' 00:22:52.878 killing process with pid 835172 00:22:52.878 19:13:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 835172
00:22:52.878 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 835172
00:22:52.878 Received shutdown signal, test time was about 0.858763 seconds
00:22:52.878
00:22:52.878                                                             Latency(us)
00:22:52.878 Device Information     : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:22:52.878 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme1n1           :       0.85     370.34      23.15       0.00       0.00  168763.21    7180.47  246187.41
00:22:52.878 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme2n1           :       0.85     380.45      23.78       0.00       0.00  160970.34    5157.40  175066.60
00:22:52.878 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme3n1           :       0.85     377.56      23.60       0.00       0.00  159020.61    9346.00  167772.16
00:22:52.878 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme4n1           :       0.85     377.03      23.56       0.00       0.00  156113.52    9630.94  160477.72
00:22:52.878 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme5n1           :       0.85     376.40      23.52       0.00       0.00  153738.73   10143.83  149536.06
00:22:52.878 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme6n1           :       0.85     375.85      23.49       0.00       0.00  150435.93   10599.74  141329.81
00:22:52.878 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme7n1           :       0.85     375.32      23.46       0.00       0.00  147293.05   10941.66  134035.37
00:22:52.878 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme8n1           :       0.85     374.77      23.42       0.00       0.00  144439.03   11283.59  125829.12
00:22:52.878 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme9n1           :       0.86     373.16      23.32       0.00       0.00  142112.19    2778.16  113519.75
00:22:52.878 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.878 Verification LBA range: start 0x0 length 0x400
00:22:52.878      Nvme10n1          :       0.84     227.47      14.22       0.00       0.00  228297.98    8434.20  341015.15
00:22:52.878 ===================================================================================================================
00:22:52.878      Total             :               3608.36     225.52       0.00       0.00  158297.20    2778.16  341015.15
00:22:53.137 19:13:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 834890
00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:54.072 rmmod nvme_rdma 00:22:54.072 rmmod nvme_fabrics 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 834890 ']' 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 834890 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 834890 ']' 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 834890 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.072 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834890 00:22:54.331 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.331 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.331 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834890' 00:22:54.331 killing process with pid 834890 00:22:54.331 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 834890 00:22:54.331 19:13:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 834890 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.591 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:54.591 00:22:54.591 real 0m5.580s 00:22:54.591 user 0m22.693s 00:22:54.591 sys 0m1.038s 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.591 ************************************ 00:22:54.591 END TEST nvmf_shutdown_tc2 00:22:54.591 ************************************ 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.591 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:54.851 ************************************ 00:22:54.851 START TEST nvmf_shutdown_tc3 00:22:54.851 ************************************ 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.851 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.852 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:54.852 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:22:54.852 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:22:54.852 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:22:54.852 Found net devices under 
0000:af:00.0: mlx_0_0 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:22:54.852 Found net devices under 0000:af:00.1: mlx_0_1 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:54.852 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:54.853 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:54.853 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:22:54.853 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:22:54.853     link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff
00:22:54.853     altname enp175s0f0np0
00:22:54.853     altname ens801f0np0
00:22:54.853     inet 192.168.100.8/24 scope global mlx_0_0
00:22:54.853        valid_lft forever preferred_lft forever
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:22:54.853 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:22:54.853     link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff
00:22:54.853     altname enp175s0f1np1
00:22:54.853     altname ens801f1np1
00:22:54.853     inet 192.168.100.9/24 scope global mlx_0_1
00:22:54.853        valid_lft forever preferred_lft forever
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:22:54.853 
19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:54.853 192.168.100.9' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:54.853 192.168.100.9' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:22:54.853 19:13:47 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:54.853 192.168.100.9' 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:54.853 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=835987 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 835987 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 835987 ']' 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.854 19:13:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.113 [2024-07-25 19:13:47.354566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:55.113 [2024-07-25 19:13:47.354607] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.113 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.113 [2024-07-25 19:13:47.424829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.113 [2024-07-25 19:13:47.502659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.113 [2024-07-25 19:13:47.502694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.113 [2024-07-25 19:13:47.502701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.113 [2024-07-25 19:13:47.502708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.113 [2024-07-25 19:13:47.502713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.113 [2024-07-25 19:13:47.502820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.113 [2024-07-25 19:13:47.502945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.113 [2024-07-25 19:13:47.503051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.113 [2024-07-25 19:13:47.503051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.049 [2024-07-25 19:13:48.255674] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e2d0f0/0x1e315e0) succeed. 00:22:56.049 [2024-07-25 19:13:48.264928] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e2e730/0x1e72c80) succeed. 
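
The address discovery traced above comes down to two small shell steps: get_ip_address (nvmf/common.sh@112-113) strips the prefix length off "ip -o -4 addr show" output, and the head/tail pair at common.sh@457-458 picks the first and second entries out of the newline-separated RDMA_IP_LIST. A standalone sketch of both steps, grounded in the commands this trace shows, using the two addresses this run discovered:

# IPv4 helper as traced at nvmf/common.sh@112-113: field 4 of the one-line
# "ip -o -4" output is the CIDR address, e.g. "192.168.100.8/24".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# One address per line, the shape get_available_rdma_ips builds above.
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"

# First target: line 1 (common.sh@457). Second target: drop line 1, then
# take the next line (the tail -n +2 | head -n 1 pair at common.sh@458).
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP"     # 192.168.100.8 in this run
echo "$NVMF_SECOND_TARGET_IP"    # 192.168.100.9 in this run
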
00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.049 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.050 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.050 Malloc1 00:22:56.050 [2024-07-25 19:13:48.476586] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:56.050 Malloc2 00:22:56.308 Malloc3 00:22:56.308 Malloc4 00:22:56.308 Malloc5 00:22:56.308 Malloc6 00:22:56.308 Malloc7 00:22:56.308 Malloc8 00:22:56.568 Malloc9 00:22:56.568 Malloc10 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=836273 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 836273 /var/tmp/bdevperf.sock 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 836273 ']' 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
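
The create_subsystems phase traced at target/shutdown.sh@22-28 is a plain loop that emits one block of RPCs per subsystem into rpcs.txt. The per-subsystem body is elided from this log, so the skeleton below only shows the shape; the append into rpcs.txt is an assumption inferred from the rm at @26 and from the ten Malloc bdevs that appear in the output above.

# Skeleton of the subsystem-generation loop (shutdown.sh@22-28); the printf
# is a stand-in for the real RPC lines, which this log does not capture.
num_subsystems=({1..10})
rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"
for i in "${num_subsystems[@]}"; do
    # Placeholder: the actual commands create the Malloc$i bdev and the
    # nqn.2016-06.io.spdk:cnode$i subsystem seen in the log above.
    printf '# RPCs for subsystem %s go here\n' "$i" >> "$rpcs"
done

bdevperf is then pointed at those subsystems with the flags visible in its launch line above: -q 64 (queue depth), -o 65536 (64 KiB I/Os), -w verify, -t 10 (seconds), with the attach configuration fed in through --json /dev/fd/63.
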
00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.568 { 00:22:56.568 "params": { 00:22:56.568 "name": "Nvme$subsystem", 00:22:56.568 "trtype": "$TEST_TRANSPORT", 00:22:56.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.568 "adrfam": "ipv4", 00:22:56.568 "trsvcid": "$NVMF_PORT", 00:22:56.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.568 "hdgst": ${hdgst:-false}, 00:22:56.568 "ddgst": ${ddgst:-false} 00:22:56.568 }, 00:22:56.568 "method": "bdev_nvme_attach_controller" 00:22:56.568 } 00:22:56.568 EOF 00:22:56.568 )") 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.568 [2024-07-25 19:13:48.950844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:56.568 [2024-07-25 19:13:48.950892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836273 ] 00:22:56.568 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.569 { 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme$subsystem", 00:22:56.569 "trtype": "$TEST_TRANSPORT", 00:22:56.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "$NVMF_PORT", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.569 "hdgst": ${hdgst:-false}, 00:22:56.569 "ddgst": ${ddgst:-false} 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 } 00:22:56.569 EOF 00:22:56.569 )") 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.569 { 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme$subsystem", 00:22:56.569 "trtype": "$TEST_TRANSPORT", 00:22:56.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "$NVMF_PORT", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.569 "hdgst": ${hdgst:-false}, 00:22:56.569 "ddgst": ${ddgst:-false} 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 } 00:22:56.569 EOF 00:22:56.569 )") 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.569 { 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme$subsystem", 00:22:56.569 "trtype": "$TEST_TRANSPORT", 00:22:56.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "$NVMF_PORT", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.569 "hdgst": ${hdgst:-false}, 00:22:56.569 "ddgst": ${ddgst:-false} 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 } 00:22:56.569 EOF 00:22:56.569 )") 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.569 { 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme$subsystem", 00:22:56.569 "trtype": "$TEST_TRANSPORT", 00:22:56.569 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "$NVMF_PORT", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.569 "hdgst": ${hdgst:-false}, 00:22:56.569 "ddgst": ${ddgst:-false} 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 } 00:22:56.569 EOF 00:22:56.569 )") 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.569 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:56.569 19:13:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme1", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme2", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme3", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme4", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme5", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme6", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 
00:22:56.569 "params": { 00:22:56.569 "name": "Nvme7", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme8", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme9", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.569 }, 00:22:56.569 "method": "bdev_nvme_attach_controller" 00:22:56.569 },{ 00:22:56.569 "params": { 00:22:56.569 "name": "Nvme10", 00:22:56.569 "trtype": "rdma", 00:22:56.569 "traddr": "192.168.100.8", 00:22:56.569 "adrfam": "ipv4", 00:22:56.569 "trsvcid": "4420", 00:22:56.569 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:56.569 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:56.569 "hdgst": false, 00:22:56.569 "ddgst": false 00:22:56.570 }, 00:22:56.570 "method": "bdev_nvme_attach_controller" 00:22:56.570 }' 00:22:56.570 [2024-07-25 19:13:49.022278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.828 [2024-07-25 19:13:49.095083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.762 Running I/O for 10 seconds... 
00:22:57.762 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.762 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:57.762 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:57.762 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.762 19:13:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:57.762 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:58.020 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:58.020 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.020 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.020 19:13:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.020 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.020 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 835987 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 835987 ']' 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 835987 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 835987 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 835987' 00:22:58.279 killing process with pid 835987 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 835987 00:22:58.279 19:13:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 835987 00:22:58.846 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:58.846 19:13:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:59.417 [2024-07-25 19:13:51.649278] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:22:59.417 [2024-07-25 19:13:51.650735] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:22:59.418 [2024-07-25 19:13:51.652442] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 
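
The polling traced at target/shutdown.sh@57-69 gates the shutdown path on bdevperf having actually issued I/O: it queries the bdevperf RPC socket for Nvme1n1's iostat, extracts num_read_ops with jq, and loops until the count crosses 100 (3 on the first poll here, 131 on the second). A sketch of that loop; rpc_cmd stands for the harness wrapper around scripts/rpc.py used throughout this trace.

# waitforio sketch (shutdown.sh@57-69): up to 10 polls, 0.25 s apart,
# returning 0 as soon as the bdev has served at least 100 reads.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1
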
00:22:59.418 [2024-07-25 19:13:51.654043] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:22:59.418 [2024-07-25 19:13:51.655637] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:22:59.418 [2024-07-25 19:13:51.657379] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:22:59.418 [2024-07-25 19:13:51.658967] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:22:59.418 [2024-07-25 19:13:51.659069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001ac0f100 len:0x10000 key:0x189400 00:22:59.418 [2024-07-25 19:13:51.659892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.659955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.659981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 
key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x189c00 00:22:59.418 [2024-07-25 19:13:51.660580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x189a00 00:22:59.418 [2024-07-25 19:13:51.660628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x18a300 00:22:59.418 [2024-07-25 19:13:51.660680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.418 [2024-07-25 19:13:51.660707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.660728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.660755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011112000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 
19:13:51.660776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.660803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011133000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.660823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.660850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011154000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.660871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.660898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011175000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.660928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.660954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011196000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.660976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.661004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111b7000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.661025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.661052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111d8000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.661073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.661100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111f9000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.661121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001121a000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.661170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0 00:22:59.419 [2024-07-25 19:13:51.661196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001123b000 len:0x10000 key:0x18a300 00:22:59.419 [2024-07-25 19:13:51.661221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001125c000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001127d000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001129e000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c189000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c168000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c126000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c105000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e4000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c3000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0a2000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.661969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.661997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.662019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.662045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.662067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.662094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.662115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.662142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.662163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.662191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x18a300
00:22:59.419 [2024-07-25 19:13:51.662212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.664120] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller.
00:22:59.419 [2024-07-25 19:13:51.664167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x189600
00:22:59.419 [2024-07-25 19:13:51.664191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.664224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x189600
00:22:59.419 [2024-07-25 19:13:51.664253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.664281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x189600
00:22:59.419 [2024-07-25 19:13:51.664302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.664330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x189600
00:22:59.419 [2024-07-25 19:13:51.664352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.419 [2024-07-25 19:13:51.664379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x189600
00:22:59.419 [2024-07-25 19:13:51.664401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.664971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.664993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x189600
00:22:59.420 [2024-07-25 19:13:51.665633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.665681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.665729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.665777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.665826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.665873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.665983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.666035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.666083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.666131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.666179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.420 [2024-07-25 19:13:51.666227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x18a000
00:22:59.420 [2024-07-25 19:13:51.666248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.666961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.666981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x18a000
00:22:59.421 [2024-07-25 19:13:51.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.667326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x189c00
00:22:59.421 [2024-07-25 19:13:51.667375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.667401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x18a300
00:22:59.421 [2024-07-25 19:13:51.667423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669320] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller.
00:22:59.421 [2024-07-25 19:13:51.669370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.669888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.669989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.670012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.421 [2024-07-25 19:13:51.670039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x189200
00:22:59.421 [2024-07-25 19:13:51.670060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.670973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.670995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.422 [2024-07-25 19:13:51.671504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x189f00
00:22:59.422 [2024-07-25 19:13:51.671525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x189f00
00:22:59.423 [2024-07-25 19:13:51.671572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.671967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.671995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x18a400
00:22:59.423 [2024-07-25 19:13:51.672500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.672526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x189200
00:22:59.423 [2024-07-25 19:13:51.672548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:4448f000 sqhd:52b0 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.675088] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller.
00:22:59.423 [2024-07-25 19:13:51.675228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.675252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.675275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.675294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.675315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.675354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.675373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.677211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:59.423 [2024-07-25 19:13:51.677242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.423 [2024-07-25 19:13:51.677262] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:59.423 [2024-07-25 19:13:51.677294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.677316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.677337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.677356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.677377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.677396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.677422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.423 [2024-07-25 19:13:51.677441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0
00:22:59.423 [2024-07-25 19:13:51.679120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:59.423 [2024-07-25 19:13:51.679151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:59.423 [2024-07-25 19:13:51.679168] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:59.423 [2024-07-25 19:13:51.679203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.423 [2024-07-25 19:13:51.679224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.423 [2024-07-25 19:13:51.679244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.423 [2024-07-25 19:13:51.679262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.423 [2024-07-25 19:13:51.679283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.423 [2024-07-25 19:13:51.679302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.423 [2024-07-25 19:13:51.679322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.423 [2024-07-25 19:13:51.679341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.423 [2024-07-25 19:13:51.680660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.423 [2024-07-25 19:13:51.680691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:59.423 [2024-07-25 19:13:51.680711] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.680745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.680768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.680790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.680811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.680833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.680853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.680875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.680896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.682450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.682482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:59.424 [2024-07-25 19:13:51.682507] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.682543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.682587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.682619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.682639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.682657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.682678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.682696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.684500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.684530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:59.424 [2024-07-25 19:13:51.684548] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.684582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.684627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.684648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.684670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.684691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.684713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.684733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.686088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.686120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:59.424 [2024-07-25 19:13:51.686140] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.686176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.686199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.686221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.686249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.686272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.686292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.686314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.686334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.687622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.687653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:59.424 [2024-07-25 19:13:51.687671] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.687706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.687729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.687751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.687771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.687793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.687835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.687856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.689401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.689431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:59.424 [2024-07-25 19:13:51.689450] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.689485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.689507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.689530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.689550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.689571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.689592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.689614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.689641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.690944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.690975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:59.424 [2024-07-25 19:13:51.690995] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:59.424 [2024-07-25 19:13:51.691030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.691051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.691073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.691093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.691116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.691137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.691159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.424 [2024-07-25 19:13:51.691179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:60971 cdw0:0 sqhd:0900 p:0 m:0 dnr:0 00:22:59.424 [2024-07-25 19:13:51.723167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:59.424 [2024-07-25 19:13:51.723222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:59.424 [2024-07-25 19:13:51.723231] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.424 [2024-07-25 19:13:51.733049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.424 [2024-07-25 19:13:51.733079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:59.424 [2024-07-25 19:13:51.733088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:59.424 [2024-07-25 19:13:51.733126] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733137] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733148] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733158] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733169] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733180] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.425 [2024-07-25 19:13:51.733189] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
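The same cascade has now repeated for every subsystem, cnode1 through cnode10: the pending admin ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION, the completion queue then reports CQ transport error -6 once the RDMA device is gone, nvme_ctrlr_fail marks the controller failed, and the queued failover attempt is rejected because one is already in flight. While bdevperf is still running, the same controller states can be queried over its RPC socket; a minimal sketch, assuming the socket path from the bdevperf command line later in this log:

  # Hedged sketch: list NVMe-oF controllers and their state on the bdevperf instance
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers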
00:22:59.425 [2024-07-25 19:13:51.733273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:59.425 [2024-07-25 19:13:51.733284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:59.425 [2024-07-25 19:13:51.733292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:59.425 [2024-07-25 19:13:51.733304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:59.425 [2024-07-25 19:13:51.735554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:59.425 task offset: 32768 on job bdev=Nvme7n1 fails
00:22:59.425
00:22:59.425 Latency(us)
00:22:59.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.425 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme1n1 ended in about 1.76 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme1n1 : 1.76 126.48 7.90 36.30 0.00 390285.69 7579.38 1072282.94
00:22:59.425 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme2n1 ended in about 1.76 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme2n1 : 1.76 117.89 7.37 36.27 0.00 407957.41 11796.48 1072282.94
00:22:59.425 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme3n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme3n1 : 1.77 127.44 7.96 36.25 0.00 380673.01 4530.53 1072282.94
00:22:59.425 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme4n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme4n1 : 1.77 126.80 7.93 36.23 0.00 378741.49 24048.86 1072282.94
00:22:59.425 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme5n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme5n1 : 1.77 117.68 7.36 36.21 0.00 397779.22 30089.57 1072282.94
00:22:59.425 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme6n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme6n1 : 1.77 126.67 7.92 36.19 0.00 372425.73 34420.65 1079577.38
00:22:59.425 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme7n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme7n1 : 1.77 122.65 7.67 36.17 0.00 377598.90 44906.41 1072282.94
00:22:59.425 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme8n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme8n1 : 1.77 108.47 6.78 36.16 0.00 410874.66 52428.80 1130638.47
00:22:59.425 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme9n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme9n1 : 1.77 108.41 6.78 36.14 0.00 407287.10 47413.87 1123344.03
00:22:59.425 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.425 Job: Nvme10n1 ended in about 1.77 seconds with error
00:22:59.425 Verification LBA range: start 0x0 length 0x400
00:22:59.425 Nvme10n1 : 1.77 72.24 4.51 36.12 0.00 537786.99 59267.34 1108755.14
00:22:59.425 ===================================================================================================================
00:22:59.425 Total : 1154.73 72.17 362.04 0.00 401466.92 4530.53 1130638.47
00:22:59.425 [2024-07-25 19:13:51.756575] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:59.425 [2024-07-25 19:13:51.756597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:59.425 [2024-07-25 19:13:51.756609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:59.425 [2024-07-25 19:13:51.765379] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.765438] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.425 [2024-07-25 19:13:51.765459] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:22:59.425 [2024-07-25 19:13:51.765548] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.765573] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.425 [2024-07-25 19:13:51.765590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:22:59.425 [2024-07-25 19:13:51.765669] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.765693] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.425 [2024-07-25 19:13:51.765709] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:22:59.425 [2024-07-25 19:13:51.769066] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.769108] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.425 [2024-07-25 19:13:51.769127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:22:59.425 [2024-07-25 19:13:51.769226] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.769251] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:59.425 [2024-07-25 19:13:51.769267] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:22:59.425 [2024-07-25 19:13:51.769351] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:59.425 [2024-07-25 19:13:51.769376] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: 
RDMA connect error -74 00:22:59.425 [2024-07-25 19:13:51.769392] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:22:59.425 [2024-07-25 19:13:51.769489] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:59.425 [2024-07-25 19:13:51.769513] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:59.425 [2024-07-25 19:13:51.769529] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:22:59.425 [2024-07-25 19:13:51.770165] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:59.425 [2024-07-25 19:13:51.770195] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:59.425 [2024-07-25 19:13:51.770212] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:22:59.425 [2024-07-25 19:13:51.770306] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:59.425 [2024-07-25 19:13:51.770331] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:59.425 [2024-07-25 19:13:51.770348] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:22:59.425 [2024-07-25 19:13:51.770439] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:59.425 [2024-07-25 19:13:51.770464] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:59.425 [2024-07-25 19:13:51.770487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 836273 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@120 -- # set +e 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:59.683 rmmod nvme_rdma 00:22:59.683 rmmod nvme_fabrics 00:22:59.683 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 836273 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:59.683 00:22:59.683 real 0m5.061s 00:22:59.683 user 0m17.220s 00:22:59.683 sys 0m1.107s 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.683 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.683 ************************************ 00:22:59.683 END TEST nvmf_shutdown_tc3 00:22:59.683 ************************************ 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:59.941 00:22:59.941 real 0m23.604s 00:22:59.941 user 1m10.881s 00:22:59.941 sys 0m7.832s 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.941 ************************************ 00:22:59.941 END TEST nvmf_shutdown 00:22:59.941 ************************************ 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:59.941 00:22:59.941 real 10m25.414s 00:22:59.941 user 26m31.448s 00:22:59.941 sys 1m51.454s 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.941 19:13:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:59.941 ************************************ 00:22:59.941 END TEST nvmf_target_extra 00:22:59.941 ************************************ 00:22:59.941 19:13:52 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:22:59.941 19:13:52 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:59.941 19:13:52 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:59.941 19:13:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:59.941 ************************************ 00:22:59.941 START TEST nvmf_host 00:22:59.941 ************************************ 00:22:59.941 19:13:52 
nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:22:59.941 * Looking for test storage... 00:22:59.941 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:59.941 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 ************************************ 00:23:00.198 START TEST nvmf_multicontroller 00:23:00.198 ************************************ 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:23:00.198 * Looking for test 
storage... 00:23:00.198 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.198 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.199 19:13:52 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:00.199 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:23:00.199 00:23:00.199 real 0m0.120s 00:23:00.199 user 0m0.062s 00:23:00.199 sys 0m0.066s 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.199 ************************************ 00:23:00.199 END TEST nvmf_multicontroller 00:23:00.199 ************************************ 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.199 ************************************ 00:23:00.199 START TEST nvmf_aer 00:23:00.199 ************************************ 00:23:00.199 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:23:00.457 * Looking for test storage... 
00:23:00.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.458 19:13:52 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:23:07.025 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:23:07.025 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:23:07.025 Found net devices under 0000:af:00.0: mlx_0_0 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:23:07.025 Found net devices under 0000:af:00.1: mlx_0_1 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
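Each "Found net devices under …" line above comes from globbing the device's net/ directory in sysfs and stripping the path. A standalone sketch of that lookup, with the BDF taken from this run:

  pci=0000:af:00.0                        # first mlx5 port found above
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "${dev##*/}"  # prints the netdev name, e.g. mlx_0_0
  done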
00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:07.025 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:07.026 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.026 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:23:07.026 altname enp175s0f0np0 00:23:07.026 altname ens801f0np0 00:23:07.026 inet 192.168.100.8/24 scope global mlx_0_0 00:23:07.026 valid_lft forever preferred_lft forever 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:07.026 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.026 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:23:07.026 altname enp175s0f1np1 00:23:07.026 altname ens801f1np1 00:23:07.026 inet 192.168.100.9/24 scope global mlx_0_1 00:23:07.026 valid_lft forever preferred_lft forever 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.026 19:13:58 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:07.026 192.168.100.9' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:07.026 192.168.100.9' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:07.026 192.168.100.9' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n 
+2 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=840197 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 840197 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 840197 ']' 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.026 19:13:58 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.026 [2024-07-25 19:13:58.528258] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:07.026 [2024-07-25 19:13:58.528302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.026 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.026 [2024-07-25 19:13:58.596374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.026 [2024-07-25 19:13:58.673545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.026 [2024-07-25 19:13:58.673581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.026 [2024-07-25 19:13:58.673589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.026 [2024-07-25 19:13:58.673595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.026 [2024-07-25 19:13:58.673600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
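The RDMA_IP_LIST handling above (common.sh@457/@458) is hard to read in trace form because the list itself is multi-line. A minimal sketch of the same head/tail split, with the two addresses from this run:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # first NIC's IP
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # second NIC's IP
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"                     # -> 192.168.100.8 192.168.100.9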
00:23:07.026 [2024-07-25 19:13:58.673655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.026 [2024-07-25 19:13:58.673762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.026 [2024-07-25 19:13:58.673788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.026 [2024-07-25 19:13:58.673789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.026 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.026 [2024-07-25 19:13:59.442467] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dbfdf0/0x1dc42e0) succeed. 00:23:07.026 [2024-07-25 19:13:59.451806] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dc1430/0x1e05980) succeed. 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.285 Malloc0 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.285 [2024-07-25 
19:13:59.619501] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.285 [ 00:23:07.285 { 00:23:07.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:07.285 "subtype": "Discovery", 00:23:07.285 "listen_addresses": [], 00:23:07.285 "allow_any_host": true, 00:23:07.285 "hosts": [] 00:23:07.285 }, 00:23:07.285 { 00:23:07.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.285 "subtype": "NVMe", 00:23:07.285 "listen_addresses": [ 00:23:07.285 { 00:23:07.285 "trtype": "RDMA", 00:23:07.285 "adrfam": "IPv4", 00:23:07.285 "traddr": "192.168.100.8", 00:23:07.285 "trsvcid": "4420" 00:23:07.285 } 00:23:07.285 ], 00:23:07.285 "allow_any_host": true, 00:23:07.285 "hosts": [], 00:23:07.285 "serial_number": "SPDK00000000000001", 00:23:07.285 "model_number": "SPDK bdev Controller", 00:23:07.285 "max_namespaces": 2, 00:23:07.285 "min_cntlid": 1, 00:23:07.285 "max_cntlid": 65519, 00:23:07.285 "namespaces": [ 00:23:07.285 { 00:23:07.285 "nsid": 1, 00:23:07.285 "bdev_name": "Malloc0", 00:23:07.285 "name": "Malloc0", 00:23:07.285 "nguid": "BAF2B95D1719471D85C11E9053E6E386", 00:23:07.285 "uuid": "baf2b95d-1719-471d-85c1-1e9053e6e386" 00:23:07.285 } 00:23:07.285 ] 00:23:07.285 } 00:23:07.285 ] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=840448 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:07.285 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:07.285 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 Malloc1 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 [ 00:23:07.544 { 00:23:07.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:07.544 "subtype": "Discovery", 00:23:07.544 "listen_addresses": [], 00:23:07.544 "allow_any_host": true, 00:23:07.544 "hosts": [] 00:23:07.544 }, 00:23:07.544 { 00:23:07.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.544 "subtype": "NVMe", 00:23:07.544 "listen_addresses": [ 00:23:07.544 { 00:23:07.544 "trtype": "RDMA", 00:23:07.544 "adrfam": "IPv4", 00:23:07.544 "traddr": "192.168.100.8", 00:23:07.544 "trsvcid": "4420" 00:23:07.544 } 00:23:07.544 ], 00:23:07.544 "allow_any_host": true, 00:23:07.544 "hosts": [], 00:23:07.544 "serial_number": "SPDK00000000000001", 00:23:07.544 "model_number": "SPDK bdev Controller", 00:23:07.544 "max_namespaces": 2, 00:23:07.544 "min_cntlid": 1, 00:23:07.544 "max_cntlid": 65519, 00:23:07.544 "namespaces": [ 00:23:07.544 { 00:23:07.544 "nsid": 1, 00:23:07.544 "bdev_name": "Malloc0", 00:23:07.544 "name": "Malloc0", 00:23:07.544 "nguid": "BAF2B95D1719471D85C11E9053E6E386", 00:23:07.544 "uuid": "baf2b95d-1719-471d-85c1-1e9053e6e386" 00:23:07.544 }, 00:23:07.544 { 00:23:07.544 "nsid": 2, 00:23:07.544 "bdev_name": "Malloc1", 00:23:07.544 "name": "Malloc1", 00:23:07.544 "nguid": "CC77FC162AD743349006B3F2236746F5", 00:23:07.544 "uuid": "cc77fc16-2ad7-4334-9006-b3f2236746f5" 00:23:07.544 } 00:23:07.544 ] 00:23:07.544 } 00:23:07.544 ] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 840448 00:23:07.544 Asynchronous Event Request test 00:23:07.544 Attaching to 192.168.100.8 00:23:07.544 Attached to 192.168.100.8 00:23:07.544 Registering asynchronous event callbacks... 00:23:07.544 Starting namespace attribute notice tests for all controllers... 00:23:07.544 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:07.544 aer_cb - Changed Namespace 00:23:07.544 Cleaning up... 
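The "Changed Namespace" callback above is triggered by hot-adding Malloc1 while the aer tool is attached. The rpc_cmd calls traced in this test map one-to-one onto scripts/rpc.py invocations; collected here as a replayable sketch (paths assume an SPDK checkout and a running nvmf_tgt):

  # Target setup, as traced at the start of the test:
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Namespace hot-add that fires the AEN seen above:
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2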
00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.544 19:13:59 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.544 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:07.544 rmmod nvme_rdma 00:23:07.803 rmmod nvme_fabrics 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 840197 ']' 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 840197 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 840197 ']' 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 840197 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 840197 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 840197' 00:23:07.803 killing process with pid 
840197 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 840197 00:23:07.803 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 840197 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:08.062 00:23:08.062 real 0m7.751s 00:23:08.062 user 0m8.429s 00:23:08.062 sys 0m4.764s 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.062 ************************************ 00:23:08.062 END TEST nvmf_aer 00:23:08.062 ************************************ 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.062 ************************************ 00:23:08.062 START TEST nvmf_async_init 00:23:08.062 ************************************ 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:23:08.062 * Looking for test storage... 00:23:08.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.062 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.322 19:14:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.322 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=de72029d209a41e599be20185f068c76 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.323 19:14:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.323 19:14:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.920 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
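One detail from the common.sh sourcing at the start of this test: NVME_HOSTID (80bdebd3-…) is the UUID portion of the NQN produced by nvme gen-hostnqn. A sketch of one way to derive it; the exact parameter expansion common.sh uses is an assumption here:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80bdebd3-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: everything after the last colon is the UUID
  echo "$NVME_HOSTID"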
00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:23:14.921 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:23:14.921 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:23:14.921 Found net devices under 0000:af:00.0: mlx_0_0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:23:14.921 Found net devices under 0000:af:00.1: mlx_0_1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
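rdma_device_init, traced above for the second time in this build, reduces to a fixed modprobe list. A standalone sketch to prepare a host the same way (nvme-rdma itself is loaded separately at the end of each test's setup, as the traces show):

  # Kernel modules loaded by load_ib_rdma_modules in the trace above:
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done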
00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:14.921 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:14.921 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:23:14.921 altname enp175s0f0np0 00:23:14.921 altname ens801f0np0 00:23:14.921 inet 192.168.100.8/24 scope global mlx_0_0 00:23:14.921 valid_lft forever preferred_lft forever 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:14.921 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:14.921 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:23:14.921 altname enp175s0f1np1 00:23:14.921 altname ens801f1np1 00:23:14.921 inet 192.168.100.9/24 scope global mlx_0_1 00:23:14.921 valid_lft forever preferred_lft forever 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:14.921 19:14:06 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:14.921 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:14.922 192.168.100.9' 
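get_ip_address, called for each RDMA interface above, is a three-stage pipeline over one-line ip output. Isolated, with the interface name from this run:

  interface=mlx_0_0
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8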
00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:14.922 192.168.100.9' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:14.922 192.168.100.9' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=843559 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 843559 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 843559 ']' 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.922 19:14:06 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.922 [2024-07-25 19:14:06.443626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
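nvmfappstart above boils down to launching nvmf_tgt in the background and blocking until its RPC socket answers. A simplified sketch; the polling loop is a stand-in for the harness's waitforlisten helper, not its exact logic:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                      # retry until the UNIX domain socket is up
  done
  echo "nvmf_tgt ready (pid $nvmfpid)"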
00:23:14.922 [2024-07-25 19:14:06.443673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.922 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.922 [2024-07-25 19:14:06.513584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.922 [2024-07-25 19:14:06.591518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.922 [2024-07-25 19:14:06.591557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.922 [2024-07-25 19:14:06.591564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.922 [2024-07-25 19:14:06.591571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.922 [2024-07-25 19:14:06.591575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.922 [2024-07-25 19:14:06.591597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.922 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.922 [2024-07-25 19:14:07.342197] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc2cbb0/0xc310a0) succeed. 00:23:14.922 [2024-07-25 19:14:07.352060] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc2e0b0/0xc72740) succeed. 
00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 null0 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g de72029d209a41e599be20185f068c76 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 [2024-07-25 19:14:07.453646] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.182 nvme0n1 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.182 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [ 
00:23:15.183 { 00:23:15.183 "name": "nvme0n1", 00:23:15.183 "aliases": [ 00:23:15.183 "de72029d-209a-41e5-99be-20185f068c76" 00:23:15.183 ], 00:23:15.183 "product_name": "NVMe disk", 00:23:15.183 "block_size": 512, 00:23:15.183 "num_blocks": 2097152, 00:23:15.183 "uuid": "de72029d-209a-41e5-99be-20185f068c76", 00:23:15.183 "assigned_rate_limits": { 00:23:15.183 "rw_ios_per_sec": 0, 00:23:15.183 "rw_mbytes_per_sec": 0, 00:23:15.183 "r_mbytes_per_sec": 0, 00:23:15.183 "w_mbytes_per_sec": 0 00:23:15.183 }, 00:23:15.183 "claimed": false, 00:23:15.183 "zoned": false, 00:23:15.183 "supported_io_types": { 00:23:15.183 "read": true, 00:23:15.183 "write": true, 00:23:15.183 "unmap": false, 00:23:15.183 "flush": true, 00:23:15.183 "reset": true, 00:23:15.183 "nvme_admin": true, 00:23:15.183 "nvme_io": true, 00:23:15.183 "nvme_io_md": false, 00:23:15.183 "write_zeroes": true, 00:23:15.183 "zcopy": false, 00:23:15.183 "get_zone_info": false, 00:23:15.183 "zone_management": false, 00:23:15.183 "zone_append": false, 00:23:15.183 "compare": true, 00:23:15.183 "compare_and_write": true, 00:23:15.183 "abort": true, 00:23:15.183 "seek_hole": false, 00:23:15.183 "seek_data": false, 00:23:15.183 "copy": true, 00:23:15.183 "nvme_iov_md": false 00:23:15.183 }, 00:23:15.183 "memory_domains": [ 00:23:15.183 { 00:23:15.183 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:15.183 "dma_device_type": 0 00:23:15.183 } 00:23:15.183 ], 00:23:15.183 "driver_specific": { 00:23:15.183 "nvme": [ 00:23:15.183 { 00:23:15.183 "trid": { 00:23:15.183 "trtype": "RDMA", 00:23:15.183 "adrfam": "IPv4", 00:23:15.183 "traddr": "192.168.100.8", 00:23:15.183 "trsvcid": "4420", 00:23:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.183 }, 00:23:15.183 "ctrlr_data": { 00:23:15.183 "cntlid": 1, 00:23:15.183 "vendor_id": "0x8086", 00:23:15.183 "model_number": "SPDK bdev Controller", 00:23:15.183 "serial_number": "00000000000000000000", 00:23:15.183 "firmware_revision": "24.09", 00:23:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.183 "oacs": { 00:23:15.183 "security": 0, 00:23:15.183 "format": 0, 00:23:15.183 "firmware": 0, 00:23:15.183 "ns_manage": 0 00:23:15.183 }, 00:23:15.183 "multi_ctrlr": true, 00:23:15.183 "ana_reporting": false 00:23:15.183 }, 00:23:15.183 "vs": { 00:23:15.183 "nvme_version": "1.3" 00:23:15.183 }, 00:23:15.183 "ns_data": { 00:23:15.183 "id": 1, 00:23:15.183 "can_share": true 00:23:15.183 } 00:23:15.183 } 00:23:15.183 ], 00:23:15.183 "mp_policy": "active_passive" 00:23:15.183 } 00:23:15.183 } 00:23:15.183 ] 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [2024-07-25 19:14:07.572609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:15.183 [2024-07-25 19:14:07.596549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:15.183 [2024-07-25 19:14:07.628170] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [ 00:23:15.183 { 00:23:15.183 "name": "nvme0n1", 00:23:15.183 "aliases": [ 00:23:15.183 "de72029d-209a-41e5-99be-20185f068c76" 00:23:15.183 ], 00:23:15.183 "product_name": "NVMe disk", 00:23:15.183 "block_size": 512, 00:23:15.183 "num_blocks": 2097152, 00:23:15.183 "uuid": "de72029d-209a-41e5-99be-20185f068c76", 00:23:15.183 "assigned_rate_limits": { 00:23:15.183 "rw_ios_per_sec": 0, 00:23:15.183 "rw_mbytes_per_sec": 0, 00:23:15.183 "r_mbytes_per_sec": 0, 00:23:15.183 "w_mbytes_per_sec": 0 00:23:15.183 }, 00:23:15.183 "claimed": false, 00:23:15.183 "zoned": false, 00:23:15.183 "supported_io_types": { 00:23:15.183 "read": true, 00:23:15.183 "write": true, 00:23:15.183 "unmap": false, 00:23:15.183 "flush": true, 00:23:15.183 "reset": true, 00:23:15.183 "nvme_admin": true, 00:23:15.183 "nvme_io": true, 00:23:15.183 "nvme_io_md": false, 00:23:15.183 "write_zeroes": true, 00:23:15.183 "zcopy": false, 00:23:15.183 "get_zone_info": false, 00:23:15.183 "zone_management": false, 00:23:15.183 "zone_append": false, 00:23:15.183 "compare": true, 00:23:15.183 "compare_and_write": true, 00:23:15.183 "abort": true, 00:23:15.183 "seek_hole": false, 00:23:15.183 "seek_data": false, 00:23:15.183 "copy": true, 00:23:15.183 "nvme_iov_md": false 00:23:15.183 }, 00:23:15.183 "memory_domains": [ 00:23:15.183 { 00:23:15.183 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:15.183 "dma_device_type": 0 00:23:15.183 } 00:23:15.183 ], 00:23:15.183 "driver_specific": { 00:23:15.183 "nvme": [ 00:23:15.183 { 00:23:15.183 "trid": { 00:23:15.183 "trtype": "RDMA", 00:23:15.183 "adrfam": "IPv4", 00:23:15.183 "traddr": "192.168.100.8", 00:23:15.183 "trsvcid": "4420", 00:23:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.183 }, 00:23:15.183 "ctrlr_data": { 00:23:15.183 "cntlid": 2, 00:23:15.183 "vendor_id": "0x8086", 00:23:15.183 "model_number": "SPDK bdev Controller", 00:23:15.183 "serial_number": "00000000000000000000", 00:23:15.183 "firmware_revision": "24.09", 00:23:15.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.183 "oacs": { 00:23:15.183 "security": 0, 00:23:15.183 "format": 0, 00:23:15.183 "firmware": 0, 00:23:15.183 "ns_manage": 0 00:23:15.183 }, 00:23:15.183 "multi_ctrlr": true, 00:23:15.183 "ana_reporting": false 00:23:15.183 }, 00:23:15.183 "vs": { 00:23:15.183 "nvme_version": "1.3" 00:23:15.183 }, 00:23:15.183 "ns_data": { 00:23:15.183 "id": 1, 00:23:15.183 "can_share": true 00:23:15.183 } 00:23:15.183 } 00:23:15.183 ], 00:23:15.183 "mp_policy": "active_passive" 00:23:15.183 } 00:23:15.183 } 00:23:15.183 ] 00:23:15.183 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AXDeOr2ZvB 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AXDeOr2ZvB 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 [2024-07-25 19:14:07.702797] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AXDeOr2ZvB 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AXDeOr2ZvB 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 [2024-07-25 19:14:07.722841] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.443 nvme0n1 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 [ 00:23:15.443 { 00:23:15.443 "name": "nvme0n1", 00:23:15.443 "aliases": [ 00:23:15.443 "de72029d-209a-41e5-99be-20185f068c76" 00:23:15.443 ], 00:23:15.443 "product_name": "NVMe disk", 00:23:15.443 "block_size": 512, 00:23:15.443 "num_blocks": 2097152, 00:23:15.443 "uuid": 
"de72029d-209a-41e5-99be-20185f068c76", 00:23:15.443 "assigned_rate_limits": { 00:23:15.443 "rw_ios_per_sec": 0, 00:23:15.443 "rw_mbytes_per_sec": 0, 00:23:15.443 "r_mbytes_per_sec": 0, 00:23:15.443 "w_mbytes_per_sec": 0 00:23:15.443 }, 00:23:15.443 "claimed": false, 00:23:15.443 "zoned": false, 00:23:15.443 "supported_io_types": { 00:23:15.443 "read": true, 00:23:15.443 "write": true, 00:23:15.443 "unmap": false, 00:23:15.443 "flush": true, 00:23:15.443 "reset": true, 00:23:15.443 "nvme_admin": true, 00:23:15.443 "nvme_io": true, 00:23:15.443 "nvme_io_md": false, 00:23:15.443 "write_zeroes": true, 00:23:15.443 "zcopy": false, 00:23:15.443 "get_zone_info": false, 00:23:15.443 "zone_management": false, 00:23:15.443 "zone_append": false, 00:23:15.443 "compare": true, 00:23:15.443 "compare_and_write": true, 00:23:15.443 "abort": true, 00:23:15.443 "seek_hole": false, 00:23:15.443 "seek_data": false, 00:23:15.443 "copy": true, 00:23:15.443 "nvme_iov_md": false 00:23:15.443 }, 00:23:15.443 "memory_domains": [ 00:23:15.443 { 00:23:15.443 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:23:15.443 "dma_device_type": 0 00:23:15.443 } 00:23:15.443 ], 00:23:15.443 "driver_specific": { 00:23:15.443 "nvme": [ 00:23:15.443 { 00:23:15.443 "trid": { 00:23:15.443 "trtype": "RDMA", 00:23:15.443 "adrfam": "IPv4", 00:23:15.443 "traddr": "192.168.100.8", 00:23:15.443 "trsvcid": "4421", 00:23:15.443 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:15.443 }, 00:23:15.443 "ctrlr_data": { 00:23:15.443 "cntlid": 3, 00:23:15.443 "vendor_id": "0x8086", 00:23:15.443 "model_number": "SPDK bdev Controller", 00:23:15.443 "serial_number": "00000000000000000000", 00:23:15.443 "firmware_revision": "24.09", 00:23:15.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.443 "oacs": { 00:23:15.443 "security": 0, 00:23:15.443 "format": 0, 00:23:15.443 "firmware": 0, 00:23:15.443 "ns_manage": 0 00:23:15.443 }, 00:23:15.443 "multi_ctrlr": true, 00:23:15.443 "ana_reporting": false 00:23:15.443 }, 00:23:15.443 "vs": { 00:23:15.443 "nvme_version": "1.3" 00:23:15.443 }, 00:23:15.443 "ns_data": { 00:23:15.443 "id": 1, 00:23:15.443 "can_share": true 00:23:15.443 } 00:23:15.443 } 00:23:15.443 ], 00:23:15.443 "mp_policy": "active_passive" 00:23:15.443 } 00:23:15.443 } 00:23:15.443 ] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.AXDeOr2ZvB 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:15.443 rmmod nvme_rdma 00:23:15.443 rmmod nvme_fabrics 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 843559 ']' 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 843559 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 843559 ']' 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 843559 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.443 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 843559 00:23:15.702 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:15.702 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:15.702 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 843559' 00:23:15.702 killing process with pid 843559 00:23:15.702 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 843559 00:23:15.702 19:14:07 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 843559 00:23:15.702 19:14:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.702 19:14:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:15.702 00:23:15.702 real 0m7.717s 00:23:15.702 user 0m3.599s 00:23:15.702 sys 0m4.771s 00:23:15.702 19:14:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.702 19:14:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.702 ************************************ 00:23:15.702 END TEST nvmf_async_init 00:23:15.702 ************************************ 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.962 ************************************ 00:23:15.962 START TEST dma 00:23:15.962 ************************************ 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:23:15.962 * Looking for test storage... 
00:23:15.962 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.962 19:14:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.532 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:23:22.533 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:23:22.533 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:23:22.533 Found net devices under 0000:af:00.0: mlx_0_0 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:23:22.533 Found net devices under 0000:af:00.1: mlx_0_1 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:22.533 19:14:13 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:22.533 19:14:14 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:22.533 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:22.533 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:23:22.533 altname enp175s0f0np0 00:23:22.533 altname ens801f0np0 00:23:22.533 inet 192.168.100.8/24 scope global mlx_0_0 00:23:22.533 valid_lft forever preferred_lft forever 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:22.533 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:22.533 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:23:22.533 altname enp175s0f1np1 00:23:22.533 altname ens801f1np1 00:23:22.533 inet 192.168.100.9/24 scope global mlx_0_1 00:23:22.533 valid_lft forever preferred_lft forever 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # 
(( 2 == 0 )) 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:22.533 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:22.534 192.168.100.9' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:22.534 192.168.100.9' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:22.534 192.168.100.9' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:22.534 19:14:14 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=847117 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 847117 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 847117 ']' 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.534 19:14:14 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.534 [2024-07-25 19:14:14.208881] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:22.534 [2024-07-25 19:14:14.208933] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.534 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.534 [2024-07-25 19:14:14.273681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:22.534 [2024-07-25 19:14:14.344714] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.534 [2024-07-25 19:14:14.344754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.534 [2024-07-25 19:14:14.344760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.534 [2024-07-25 19:14:14.344767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.534 [2024-07-25 19:14:14.344771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.534 [2024-07-25 19:14:14.344846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.534 [2024-07-25 19:14:14.344847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 [2024-07-25 19:14:15.099758] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd56720/0xd5ac10) succeed. 00:23:22.794 [2024-07-25 19:14:15.108683] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd57c20/0xd9c2b0) succeed. 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 Malloc0 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.794 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:22.794 [2024-07-25 19:14:15.261626] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.054 { 00:23:23.054 "params": { 00:23:23.054 "name": "Nvme$subsystem", 00:23:23.054 "trtype": "$TEST_TRANSPORT", 00:23:23.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.054 "adrfam": "ipv4", 00:23:23.054 "trsvcid": "$NVMF_PORT", 00:23:23.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.054 "hdgst": ${hdgst:-false}, 00:23:23.054 "ddgst": ${ddgst:-false} 00:23:23.054 }, 00:23:23.054 "method": "bdev_nvme_attach_controller" 00:23:23.054 } 00:23:23.054 EOF 00:23:23.054 )") 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:23:23.054 19:14:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.054 "params": { 00:23:23.054 "name": "Nvme0", 00:23:23.054 "trtype": "rdma", 00:23:23.054 "traddr": "192.168.100.8", 00:23:23.054 "adrfam": "ipv4", 00:23:23.054 "trsvcid": "4420", 00:23:23.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.054 "hdgst": false, 00:23:23.054 "ddgst": false 00:23:23.054 }, 00:23:23.054 "method": "bdev_nvme_attach_controller" 00:23:23.054 }' 00:23:23.054 [2024-07-25 19:14:15.309839] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
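(For readability: the bdev config that gen_nvmf_target_json hands test_dma over /dev/fd/62 is exactly the printf output logged above, pretty-printed here with nothing changed:)

    {
      "params": {
        "name": "Nvme0",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
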
00:23:23.054 [2024-07-25 19:14:15.309879] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847233 ] 00:23:23.054 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.054 [2024-07-25 19:14:15.376273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:23.054 [2024-07-25 19:14:15.448669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.054 [2024-07-25 19:14:15.448671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.616 bdev Nvme0n1 reports 1 memory domains 00:23:29.616 bdev Nvme0n1 supports RDMA memory domain 00:23:29.616 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:29.616 ========================================================================== 00:23:29.616 Latency [us] 00:23:29.616 IOPS MiB/s Average min max 00:23:29.616 Core 2: 20967.94 81.91 762.32 258.34 8584.13 00:23:29.616 Core 3: 21071.73 82.31 758.57 249.88 8728.48 00:23:29.616 ========================================================================== 00:23:29.616 Total : 42039.67 164.22 760.44 249.88 8728.48 00:23:29.616 00:23:29.616 Total operations: 210225, translate 210225 pull_push 0 memzero 0 00:23:29.616 19:14:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:23:29.616 19:14:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:23:29.616 19:14:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:23:29.616 [2024-07-25 19:14:20.886266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:29.616 [2024-07-25 19:14:20.886317] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848094 ] 00:23:29.616 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.616 [2024-07-25 19:14:20.955904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:29.616 [2024-07-25 19:14:21.028235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.616 [2024-07-25 19:14:21.028238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.886 bdev Malloc0 reports 2 memory domains 00:23:34.886 bdev Malloc0 doesn't support RDMA memory domain 00:23:34.886 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:34.886 ========================================================================== 00:23:34.886 Latency [us] 00:23:34.886 IOPS MiB/s Average min max 00:23:34.886 Core 2: 13946.35 54.48 1146.45 433.10 1422.63 00:23:34.886 Core 3: 13916.56 54.36 1148.88 467.84 1846.05 00:23:34.886 ========================================================================== 00:23:34.886 Total : 27862.91 108.84 1147.67 433.10 1846.05 00:23:34.886 00:23:34.886 Total operations: 139369, translate 0 pull_push 557476 memzero 0 00:23:34.886 19:14:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:23:34.886 19:14:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:23:34.886 19:14:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:34.886 19:14:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:23:34.886 Ignoring -M option 00:23:34.886 [2024-07-25 19:14:26.377794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
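Malloc0 is a RAM-backed bdev with no RDMA memory domain, so the run above falls back entirely to pull_push staging copies (557476 of them for 139369 IOs) and per-core IOPS drops to roughly two thirds of the translate run. The gen_malloc_json helper is traced only as far as its jq call, so the payload below is an assumption; the 64 MiB / 512-byte geometry mirrors the MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 defaults visible later in this log:

jq . <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 },
          "method": "bdev_malloc_create"
        }
      ]
    }
  ]
}
JSON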
00:23:34.886 [2024-07-25 19:14:26.377853] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849012 ] 00:23:34.886 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.886 [2024-07-25 19:14:26.446859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:34.886 [2024-07-25 19:14:26.519022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.886 [2024-07-25 19:14:26.519022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.155 bdev 99eaddcf-dd8f-44be-8290-6a0ff6204821 reports 1 memory domains 00:23:40.155 bdev 99eaddcf-dd8f-44be-8290-6a0ff6204821 supports RDMA memory domain 00:23:40.155 Initialization complete, running randread IO for 5 sec on 2 cores 00:23:40.155 ========================================================================== 00:23:40.155 Latency [us] 00:23:40.155 IOPS MiB/s Average min max 00:23:40.155 Core 2: 72759.47 284.22 219.11 80.26 2859.75 00:23:40.155 Core 3: 72168.35 281.91 220.91 70.35 2799.47 00:23:40.155 ========================================================================== 00:23:40.155 Total : 144927.83 566.12 220.01 70.35 2859.75 00:23:40.155 00:23:40.155 Total operations: 724740, translate 0 pull_push 0 memzero 724740 00:23:40.155 19:14:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:23:40.155 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.155 [2024-07-25 19:14:32.068844] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:42.056 Initializing NVMe Controllers 00:23:42.056 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:23:42.056 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:42.056 Initialization complete. Launching workers. 00:23:42.056 ======================================================== 00:23:42.056 Latency(us) 00:23:42.056 Device Information : IOPS MiB/s Average min max 00:23:42.056 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.07 6997.39 8014.51 00:23:42.056 ======================================================== 00:23:42.056 Total : 2016.00 7.88 7972.07 6997.39 8014.51 00:23:42.056 00:23:42.056 19:14:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:23:42.056 19:14:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:23:42.056 19:14:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:42.056 19:14:34 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:23:42.056 [2024-07-25 19:14:34.410209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
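The three DMA paths exercised so far line up as follows (numbers copied from the result blocks above; the memzero run used randread rather than randrw, which is at least part of why its IOPS is so much higher):

-x mode      bdev         workload   total IOPS   operation counters
translate    Nvme0n1      randrw     42039.67     translate 210225, pull_push 0, memzero 0
pull_push    Malloc0      randrw     27862.91     translate 0, pull_push 557476, memzero 0
memzero      lvs0/lvol0   randread   144927.83    translate 0, pull_push 0, memzero 724740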
00:23:42.056 [2024-07-25 19:14:34.410253] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850397 ] 00:23:42.056 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.056 [2024-07-25 19:14:34.480146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:42.314 [2024-07-25 19:14:34.553506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.314 [2024-07-25 19:14:34.553508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.583 bdev 9653adde-1527-4032-bc11-70166b575fa6 reports 1 memory domains 00:23:47.583 bdev 9653adde-1527-4032-bc11-70166b575fa6 supports RDMA memory domain 00:23:47.583 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:47.583 ========================================================================== 00:23:47.583 Latency [us] 00:23:47.583 IOPS MiB/s Average min max 00:23:47.583 Core 2: 18781.22 73.36 851.18 17.45 10351.75 00:23:47.583 Core 3: 18503.49 72.28 863.99 21.56 9998.91 00:23:47.583 ========================================================================== 00:23:47.583 Total : 37284.71 145.64 857.54 17.45 10351.75 00:23:47.583 00:23:47.583 Total operations: 186475, translate 186370 pull_push 0 memzero 105 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.583 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:47.583 rmmod nvme_rdma 00:23:47.583 rmmod nvme_fabrics 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 847117 ']' 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 847117 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 847117 ']' 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 847117 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 847117 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 847117' 00:23:47.842 killing process with pid 847117 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 847117 00:23:47.842 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 847117 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:48.101 00:23:48.101 real 0m32.210s 00:23:48.101 user 1m36.807s 00:23:48.101 sys 0m5.435s 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:48.101 ************************************ 00:23:48.101 END TEST dma 00:23:48.101 ************************************ 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.101 ************************************ 00:23:48.101 START TEST nvmf_identify 00:23:48.101 ************************************ 00:23:48.101 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:48.360 * Looking for test storage... 00:23:48.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.360 19:14:40 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.360 19:14:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
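The host identity used throughout this test is generated on the fly: nvmf/common.sh@17-19, traced above, calls nvme gen-hostnqn and derives the host ID from the resulting NQN. The trace shows only the final values, so the extraction step below is an assumption; a minimal reproduction:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep everything after the last ':' -- the bare UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")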
00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:23:54.930 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:23:54.930 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:54.930 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:23:54.931 Found net devices under 0000:af:00.0: mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:23:54.931 Found net devices under 0000:af:00.1: mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 
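The device scan above matches each supported PCI ID out of pci_bus_cache (both ports here are 0x15b3:0x1017, a Mellanox ConnectX-5) and then resolves the PCI function to its kernel net device through sysfs. The mapping step, extracted from the nvmf/common.sh@383-400 trace into a standalone loop:

for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per exposed netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep mlx_0_0 / mlx_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done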
00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:54.931 19:14:46 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:54.931 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.931 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:23:54.931 altname enp175s0f0np0 00:23:54.931 altname ens801f0np0 00:23:54.931 inet 192.168.100.8/24 scope global mlx_0_0 00:23:54.931 valid_lft forever preferred_lft forever 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:54.931 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:54.931 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:23:54.931 altname enp175s0f1np1 00:23:54.931 altname ens801f1np1 00:23:54.931 inet 192.168.100.9/24 scope global mlx_0_1 00:23:54.931 valid_lft forever preferred_lft forever 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:54.931 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:54.932 192.168.100.9' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:54.932 192.168.100.9' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:54.932 192.168.100.9' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:54.932 19:14:46 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=854439 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 854439 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 854439 ']' 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.932 19:14:46 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.932 [2024-07-25 19:14:46.527608] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:54.932 [2024-07-25 19:14:46.527655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.932 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.932 [2024-07-25 19:14:46.596534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.932 [2024-07-25 19:14:46.675509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.932 [2024-07-25 19:14:46.675545] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.932 [2024-07-25 19:14:46.675552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.932 [2024-07-25 19:14:46.675558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.932 [2024-07-25 19:14:46.675564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
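nvmf/common.sh@456-458, traced above, collects the per-interface addresses into RDMA_IP_LIST and then peels off the first and second entries with head/tail to get the target IPs. The same logic in isolation:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9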
00:23:54.932 [2024-07-25 19:14:46.675609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.932 [2024-07-25 19:14:46.675721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.932 [2024-07-25 19:14:46.675827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.932 [2024-07-25 19:14:46.675828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.932 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.932 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:54.932 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:54.932 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.932 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.932 [2024-07-25 19:14:47.390717] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x131cdf0/0x13212e0) succeed. 00:23:55.191 [2024-07-25 19:14:47.400124] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x131e430/0x1362980) succeed. 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.191 Malloc0 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.191 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 
-- # set +x 00:23:55.192 [2024-07-25 19:14:47.604293] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 [ 00:23:55.192 { 00:23:55.192 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.192 "subtype": "Discovery", 00:23:55.192 "listen_addresses": [ 00:23:55.192 { 00:23:55.192 "trtype": "RDMA", 00:23:55.192 "adrfam": "IPv4", 00:23:55.192 "traddr": "192.168.100.8", 00:23:55.192 "trsvcid": "4420" 00:23:55.192 } 00:23:55.192 ], 00:23:55.192 "allow_any_host": true, 00:23:55.192 "hosts": [] 00:23:55.192 }, 00:23:55.192 { 00:23:55.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.192 "subtype": "NVMe", 00:23:55.192 "listen_addresses": [ 00:23:55.192 { 00:23:55.192 "trtype": "RDMA", 00:23:55.192 "adrfam": "IPv4", 00:23:55.192 "traddr": "192.168.100.8", 00:23:55.192 "trsvcid": "4420" 00:23:55.192 } 00:23:55.192 ], 00:23:55.192 "allow_any_host": true, 00:23:55.192 "hosts": [], 00:23:55.192 "serial_number": "SPDK00000000000001", 00:23:55.192 "model_number": "SPDK bdev Controller", 00:23:55.192 "max_namespaces": 32, 00:23:55.192 "min_cntlid": 1, 00:23:55.192 "max_cntlid": 65519, 00:23:55.192 "namespaces": [ 00:23:55.192 { 00:23:55.192 "nsid": 1, 00:23:55.192 "bdev_name": "Malloc0", 00:23:55.192 "name": "Malloc0", 00:23:55.192 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:55.192 "eui64": "ABCDEF0123456789", 00:23:55.192 "uuid": "c7abbd99-2c97-4a57-90cd-ef88d1fe7d05" 00:23:55.192 } 00:23:55.192 ] 00:23:55.192 } 00:23:55.192 ] 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.192 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:55.192 [2024-07-25 19:14:47.657115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
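Before the identify run that starts here, identify.sh@24-35 assembled the target with the RPCs traced above: create the RDMA transport, back it with a malloc bdev, create the subsystem, attach the namespace, and expose both the subsystem and discovery listeners. Collected in one place for reference (rpc_cmd is assumed to front scripts/rpc.py; all arguments are copied from the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed rpc_cmd backend
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_get_subsystems   # returns the JSON dump shown above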
00:23:55.192 [2024-07-25 19:14:47.657159] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854689 ] 00:23:55.460 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.460 [2024-07-25 19:14:47.701006] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:55.461 [2024-07-25 19:14:47.701082] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:55.461 [2024-07-25 19:14:47.701096] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:55.461 [2024-07-25 19:14:47.701099] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:55.461 [2024-07-25 19:14:47.701133] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:55.461 [2024-07-25 19:14:47.711606] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:55.461 [2024-07-25 19:14:47.722785] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:55.461 [2024-07-25 19:14:47.722794] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:55.461 [2024-07-25 19:14:47.722801] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722807] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722811] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722816] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722820] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722825] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722829] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722833] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722838] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722842] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722846] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722851] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722855] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722859] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722864] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722868] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722872] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722877] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722881] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722885] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722890] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.722894] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726902] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726908] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726912] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726917] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726921] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726925] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726933] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726937] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726942] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726945] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:55.461 [2024-07-25 19:14:47.726950] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:55.461 [2024-07-25 19:14:47.726953] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:55.461 [2024-07-25 19:14:47.726968] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.726981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x187f00 00:23:55.461 [2024-07-25 19:14:47.734905] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.734915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.734923] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.734929] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.461 [2024-07-25 19:14:47.734935] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:55.461 [2024-07-25 19:14:47.734939] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:55.461 [2024-07-25 19:14:47.734955] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.734962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.461 [2024-07-25 19:14:47.734985] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.734990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.734995] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:55.461 [2024-07-25 19:14:47.734999] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735004] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:55.461 [2024-07-25 19:14:47.735009] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.461 [2024-07-25 19:14:47.735034] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.735038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.735043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:55.461 [2024-07-25 19:14:47.735047] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735059] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.461 [2024-07-25 19:14:47.735087] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.735091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.735096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735100] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735106] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.461 [2024-07-25 19:14:47.735131] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.735135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.735140] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:55.461 [2024-07-25 19:14:47.735144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735148] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735258] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:55.461 [2024-07-25 19:14:47.735262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735270] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.461 [2024-07-25 19:14:47.735297] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.461 [2024-07-25 19:14:47.735301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.461 [2024-07-25 19:14:47.735306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.461 [2024-07-25 19:14:47.735310] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735316] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.461 [2024-07-25 19:14:47.735322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.462 [2024-07-25 19:14:47.735345] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.462 [2024-07-25 19:14:47.735349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.462 [2024-07-25 19:14:47.735354] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:23:55.462 [2024-07-25 19:14:47.735358] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735363] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735368] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:55.462 [2024-07-25 19:14:47.735375] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735383] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x187f00 00:23:55.462 [2024-07-25 19:14:47.735429] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.462 [2024-07-25 19:14:47.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.462 [2024-07-25 19:14:47.735440] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:55.462 [2024-07-25 19:14:47.735444] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:55.462 [2024-07-25 19:14:47.735448] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:55.462 [2024-07-25 19:14:47.735455] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:55.462 [2024-07-25 19:14:47.735459] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:55.462 [2024-07-25 19:14:47.735463] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735467] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735478] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.462 [2024-07-25 19:14:47.735503] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.462 [2024-07-25 19:14:47.735507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.462 [2024-07-25 19:14:47.735515] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735520] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.462 [2024-07-25 19:14:47.735525] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.462 [2024-07-25 19:14:47.735535] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.462 [2024-07-25 19:14:47.735546] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.462 [2024-07-25 19:14:47.735557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735561] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735567] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.462 [2024-07-25 19:14:47.735573] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735579] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.462 [2024-07-25 19:14:47.735601] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.462 [2024-07-25 19:14:47.735606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:55.462 [2024-07-25 19:14:47.735611] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:55.462 [2024-07-25 19:14:47.735615] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:55.462 [2024-07-25 19:14:47.735619] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735627] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.462 [2024-07-25 19:14:47.735632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x187f00 00:23:55.462 [2024-07-25 19:14:47.735655] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.462 [2024-07-25 19:14:47.735659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.462 [2024-07-25 19:14:47.735665] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:23:55.462 [2024-07-25 19:14:47.735691] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x187f00
00:23:55.462 [2024-07-25 19:14:47.735704] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.462 [2024-07-25 19:14:47.735727] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.462 [2024-07-25 19:14:47.735731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:55.462 [2024-07-25 19:14:47.735741] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x187f00
00:23:55.462 [2024-07-25 19:14:47.735751] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735756] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.462 [2024-07-25 19:14:47.735760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:55.462 [2024-07-25 19:14:47.735766] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735779] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.462 [2024-07-25 19:14:47.735783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:55.462 [2024-07-25 19:14:47.735791] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x187f00
00:23:55.462 [2024-07-25 19:14:47.735801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00
00:23:55.462 [2024-07-25 19:14:47.735827] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.462 [2024-07-25 19:14:47.735831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:55.462 [2024-07-25 19:14:47.735839] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00
00:23:55.462 =====================================================
00:23:55.462 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:55.462 =====================================================
00:23:55.462 Controller Capabilities/Features
00:23:55.462 ================================
00:23:55.462 Vendor ID: 0000
00:23:55.462 Subsystem Vendor ID: 0000
00:23:55.462 Serial Number: ....................
00:23:55.462 Model Number: ........................................
00:23:55.462 Firmware Version: 24.09
00:23:55.462 Recommended Arb Burst: 0
00:23:55.462 IEEE OUI Identifier: 00 00 00
00:23:55.462 Multi-path I/O
00:23:55.462 May have multiple subsystem ports: No
00:23:55.462 May have multiple controllers: No
00:23:55.462 Associated with SR-IOV VF: No
00:23:55.462 Max Data Transfer Size: 131072
00:23:55.462 Max Number of Namespaces: 0
00:23:55.462 Max Number of I/O Queues: 1024
00:23:55.462 NVMe Specification Version (VS): 1.3
00:23:55.462 NVMe Specification Version (Identify): 1.3
00:23:55.462 Maximum Queue Entries: 128
00:23:55.462 Contiguous Queues Required: Yes
00:23:55.462 Arbitration Mechanisms Supported
00:23:55.462 Weighted Round Robin: Not Supported
00:23:55.462 Vendor Specific: Not Supported
00:23:55.462 Reset Timeout: 15000 ms
00:23:55.462 Doorbell Stride: 4 bytes
00:23:55.462 NVM Subsystem Reset: Not Supported
00:23:55.462 Command Sets Supported
00:23:55.462 NVM Command Set: Supported
00:23:55.462 Boot Partition: Not Supported
00:23:55.462 Memory Page Size Minimum: 4096 bytes
00:23:55.462 Memory Page Size Maximum: 4096 bytes
00:23:55.462 Persistent Memory Region: Not Supported
00:23:55.462 Optional Asynchronous Events Supported
00:23:55.463 Namespace Attribute Notices: Not Supported
00:23:55.463 Firmware Activation Notices: Not Supported
00:23:55.463 ANA Change Notices: Not Supported
00:23:55.463 PLE Aggregate Log Change Notices: Not Supported
00:23:55.463 LBA Status Info Alert Notices: Not Supported
00:23:55.463 EGE Aggregate Log Change Notices: Not Supported
00:23:55.463 Normal NVM Subsystem Shutdown event: Not Supported
00:23:55.463 Zone Descriptor Change Notices: Not Supported
00:23:55.463 Discovery Log Change Notices: Supported
00:23:55.463 Controller Attributes
00:23:55.463 128-bit Host Identifier: Not Supported
00:23:55.463 Non-Operational Permissive Mode: Not Supported
00:23:55.463 NVM Sets: Not Supported
00:23:55.463 Read Recovery Levels: Not Supported
00:23:55.463 Endurance Groups: Not Supported
00:23:55.463 Predictable Latency Mode: Not Supported
00:23:55.463 Traffic Based Keep Alive: Not Supported
00:23:55.463 Namespace Granularity: Not Supported
00:23:55.463 SQ Associations: Not Supported
00:23:55.463 UUID List: Not Supported
00:23:55.463 Multi-Domain Subsystem: Not Supported
00:23:55.463 Fixed Capacity Management: Not Supported
00:23:55.463 Variable Capacity Management: Not Supported
00:23:55.463 Delete Endurance Group: Not Supported
00:23:55.463 Delete NVM Set: Not Supported
00:23:55.463 Extended LBA Formats Supported: Not Supported
00:23:55.463 Flexible Data Placement Supported: Not Supported
00:23:55.463
00:23:55.463 Controller Memory Buffer Support
00:23:55.463 ================================
00:23:55.463 Supported: No
00:23:55.463
00:23:55.463 Persistent Memory Region Support
00:23:55.463 ================================
00:23:55.463 Supported: No
00:23:55.463
00:23:55.463 Admin Command Set Attributes
00:23:55.463 ============================
00:23:55.463 Security Send/Receive: Not Supported
00:23:55.463 Format NVM: Not Supported
00:23:55.463 Firmware Activate/Download: Not Supported
00:23:55.463 Namespace Management: Not Supported
00:23:55.463 Device Self-Test: Not Supported
00:23:55.463 Directives: Not Supported
00:23:55.463 NVMe-MI: Not Supported
00:23:55.463 Virtualization Management: Not Supported
00:23:55.463 Doorbell Buffer Config: Not Supported
00:23:55.463 Get LBA Status Capability: Not Supported
00:23:55.463 Command & Feature Lockdown Capability: Not Supported
00:23:55.463 Abort Command Limit: 1
00:23:55.463 Async Event Request Limit: 4
00:23:55.463 Number of Firmware Slots: N/A
00:23:55.463 Firmware Slot 1 Read-Only: N/A
00:23:55.463 Firmware Activation Without Reset: N/A
00:23:55.463 Multiple Update Detection Support: N/A
00:23:55.463 Firmware Update Granularity: No Information Provided
00:23:55.463 Per-Namespace SMART Log: No
00:23:55.463 Asymmetric Namespace Access Log Page: Not Supported
00:23:55.463 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:55.463 Command Effects Log Page: Not Supported
00:23:55.463 Get Log Page Extended Data: Supported
00:23:55.463 Telemetry Log Pages: Not Supported
00:23:55.463 Persistent Event Log Pages: Not Supported
00:23:55.463 Supported Log Pages Log Page: May Support
00:23:55.463 Commands Supported & Effects Log Page: Not Supported
00:23:55.463 Feature Identifiers & Effects Log Page: May Support
00:23:55.463 NVMe-MI Commands & Effects Log Page: May Support
00:23:55.463 Data Area 4 for Telemetry Log: Not Supported
00:23:55.463 Error Log Page Entries Supported: 128
00:23:55.463 Keep Alive: Not Supported
00:23:55.463
00:23:55.463 NVM Command Set Attributes
00:23:55.463 ==========================
00:23:55.463 Submission Queue Entry Size
00:23:55.463 Max: 1
00:23:55.463 Min: 1
00:23:55.463 Completion Queue Entry Size
00:23:55.463 Max: 1
00:23:55.463 Min: 1
00:23:55.463 Number of Namespaces: 0
00:23:55.463 Compare Command: Not Supported
00:23:55.463 Write Uncorrectable Command: Not Supported
00:23:55.463 Dataset Management Command: Not Supported
00:23:55.463 Write Zeroes Command: Not Supported
00:23:55.463 Set Features Save Field: Not Supported
00:23:55.463 Reservations: Not Supported
00:23:55.463 Timestamp: Not Supported
00:23:55.463 Copy: Not Supported
00:23:55.463 Volatile Write Cache: Not Present
00:23:55.463 Atomic Write Unit (Normal): 1
00:23:55.463 Atomic Write Unit (PFail): 1
00:23:55.463 Atomic Compare & Write Unit: 1
00:23:55.463 Fused Compare & Write: Supported
00:23:55.463 Scatter-Gather List
00:23:55.463 SGL Command Set: Supported
00:23:55.463 SGL Keyed: Supported
00:23:55.463 SGL Bit Bucket Descriptor: Not Supported
00:23:55.463 SGL Metadata Pointer: Not Supported
00:23:55.463 Oversized SGL: Not Supported
00:23:55.463 SGL Metadata Address: Not Supported
00:23:55.463 SGL Offset: Supported
00:23:55.463 Transport SGL Data Block: Not Supported
00:23:55.463 Replay Protected Memory Block: Not Supported
00:23:55.463
00:23:55.463 Firmware Slot Information
00:23:55.463 =========================
00:23:55.463 Active slot: 0
00:23:55.463
00:23:55.463
00:23:55.463 Error Log
00:23:55.463 =========
00:23:55.463
00:23:55.463 Active Namespaces
00:23:55.463 =================
00:23:55.463 Discovery Log Page
00:23:55.463 ==================
00:23:55.463 Generation Counter: 2
00:23:55.463 Number of Records: 2
00:23:55.463 Record Format: 0
00:23:55.463
00:23:55.463 Discovery Log Entry 0
00:23:55.463 ----------------------
00:23:55.463 Transport Type: 1 (RDMA)
00:23:55.463 Address Family: 1 (IPv4)
00:23:55.463 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:55.463 Entry Flags:
00:23:55.463 Duplicate Returned Information: 1
00:23:55.463 Explicit Persistent Connection Support for Discovery: 1
00:23:55.463 Transport Requirements:
00:23:55.463 Secure Channel: Not Required
00:23:55.463 Port ID: 0 (0x0000)
00:23:55.463 Controller ID: 65535 (0xffff)
00:23:55.463 Admin Max SQ Size: 128
00:23:55.463 Transport Service Identifier: 4420
00:23:55.463 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:55.463 Transport Address: 192.168.100.8
00:23:55.463 Transport Specific Address Subtype - RDMA
00:23:55.463 RDMA QP Service Type: 1 (Reliable Connected)
00:23:55.463 RDMA Provider Type: 1 (No provider specified)
00:23:55.463 RDMA CM Service: 1 (RDMA_CM)
00:23:55.463 Discovery Log Entry 1
00:23:55.463 ----------------------
00:23:55.463 Transport Type: 1 (RDMA)
00:23:55.463 Address Family: 1 (IPv4)
00:23:55.463 Subsystem Type: 2 (NVM Subsystem)
00:23:55.463 Entry Flags:
00:23:55.463 Duplicate Returned Information: 0
00:23:55.463 Explicit Persistent Connection Support for Discovery: 0
00:23:55.463 Transport Requirements:
00:23:55.463 Secure Channel: Not Required
00:23:55.463 Port ID: 0 (0x0000)
00:23:55.463 Controller ID: 65535 (0xffff)
00:23:55.463 Admin Max SQ Size: [2024-07-25 19:14:47.735912] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:55.463 [2024-07-25 19:14:47.735920] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20764 doesn't match qid
00:23:55.463 [2024-07-25 19:14:47.735932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32747 cdw0:5 sqhd:cf40 p:0 m:0 dnr:0
00:23:55.463 [2024-07-25 19:14:47.735936] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20764 doesn't match qid
00:23:55.463 [2024-07-25 19:14:47.735942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32747 cdw0:5 sqhd:cf40 p:0 m:0 dnr:0
00:23:55.463 [2024-07-25 19:14:47.735947] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20764 doesn't match qid
00:23:55.463 [2024-07-25 19:14:47.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32747 cdw0:5 sqhd:cf40 p:0 m:0 dnr:0
00:23:55.463 [2024-07-25 19:14:47.735958] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 20764 doesn't match qid
00:23:55.463 [2024-07-25 19:14:47.735963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32747 cdw0:5 sqhd:cf40 p:0 m:0 dnr:0
00:23:55.463 [2024-07-25 19:14:47.735970] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x187f00
00:23:55.463 [2024-07-25 19:14:47.735977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:55.463 [2024-07-25 19:14:47.735995] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.464 [2024-07-25 19:14:47.735999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:23:55.464 [2024-07-25 19:14:47.736008] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00
00:23:55.464 [2024-07-25 19:14:47.736014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:55.464 [2024-07-25 19:14:47.736018] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00
00:23:55.464 [2024-07-25
19:14:47.736036] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736045] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:55.464 [2024-07-25 19:14:47.736050] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:55.464 [2024-07-25 19:14:47.736055] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736062] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736084] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736093] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736101] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736128] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736137] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736144] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736171] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736180] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736186] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736212] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736221] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736228] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736253] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736263] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736270] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736297] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736309] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736316] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736340] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736349] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736356] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736383] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736392] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736399] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736425] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736434] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736440] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736462] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736471] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736478] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736502] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736510] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736517] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736546] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736556] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736563] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736588] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736597] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736604] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736631] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736639] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736646] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:55.464 [2024-07-25 19:14:47.736681] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.464 [2024-07-25 19:14:47.736693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.464 [2024-07-25 19:14:47.736714] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.464 [2024-07-25 19:14:47.736718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736723] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736730] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736753] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736762] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736769] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736794] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736804] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736811] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736836] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736845] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736876] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736885] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736892] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736922] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736931] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736937] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.736961] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.736965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.736969] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736976] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.736982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737001] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737010] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737017] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737042] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737051] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737058] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737083] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737091] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737098] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737118] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737127] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737134] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737156] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737164] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737171] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737193] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737202] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737209] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.465 [2024-07-25 19:14:47.737232] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.465 [2024-07-25 19:14:47.737236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:55.465 [2024-07-25 19:14:47.737241] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737248] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.465 [2024-07-25 19:14:47.737253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737273] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737281] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737288] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737313] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737322] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737329] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737357] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737366] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737373] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737399] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737408] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737415] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737443] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737452] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737458] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737488] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737497] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737504] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737530] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737539] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737546] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737569] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737578] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737585] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737607] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737616] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737622] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737652] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737661] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737668] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737691] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737700] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737707] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737732] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737741] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737747] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737774] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737789] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737814] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737823] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737830] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737857] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737865] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737872] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737895] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737907] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737914] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737942] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737951] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737958] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.737963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.466 [2024-07-25 19:14:47.737986] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.466 [2024-07-25 19:14:47.737990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:55.466 [2024-07-25 19:14:47.737995] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.466 [2024-07-25 19:14:47.738001] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738028] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738037] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738067] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738076] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738082] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738104] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738113] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738120] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738141] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738150] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738157] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738180] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738189] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738196] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738222] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738231] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738242] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738265] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738274] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738281] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738304] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738313] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738320] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738345] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738354] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738360] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738382] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738391] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738398] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738420] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738429] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738435] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738460] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738469] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738478] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738503] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738512] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738518] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738544] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738552] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738559] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738584] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738593] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738600] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738623] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738632] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738638] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738664] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:55.467 [2024-07-25 19:14:47.738672] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738679] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.467 [2024-07-25 19:14:47.738685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:55.467 [2024-07-25 19:14:47.738708] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.467 [2024-07-25 19:14:47.738712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:55.468 [2024-07-25 19:14:47.738718] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738725] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.468 [2024-07-25 19:14:47.738751] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.468 [2024-07-25 19:14:47.738756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:55.468 [2024-07-25 19:14:47.738760] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738767] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.468 [2024-07-25 19:14:47.738792] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.468 [2024-07-25 19:14:47.738796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:55.468 [2024-07-25 19:14:47.738801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738807] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.468 [2024-07-25 19:14:47.738834] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.468 [2024-07-25 19:14:47.738838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:55.468 [2024-07-25 19:14:47.738843] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738849] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.468 [2024-07-25 19:14:47.738876] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.468 [2024-07-25 19:14:47.738881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:55.468 [2024-07-25 19:14:47.738885] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.738892] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00
00:23:55.468 [2024-07-25 19:14:47.738898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:55.468 [2024-07-25 19:14:47.742911] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.468 [2024-07-25 19:14:47.742916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:23:55.468 [2024-07-25 19:14:47.742920] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00
00:23:55.468 [2024-07-25 19:14:47.742927] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00
00:23:55.468 [2024-07-25 19:14:47.742933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:23:55.468 [2024-07-25 19:14:47.742951] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.468 [2024-07-25 19:14:47.742955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0018 p:0 m:0 dnr:0
00:23:55.468 [2024-07-25 19:14:47.742962] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00
00:23:55.468 [2024-07-25 19:14:47.742967] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:23:55.468 128
00:23:55.468 Transport Service Identifier: 4420
00:23:55.468 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:55.468 Transport Address: 192.168.100.8
00:23:55.468 Transport Specific Address Subtype - RDMA
00:23:55.468 RDMA QP Service Type: 1 (Reliable Connected)
00:23:55.468 RDMA Provider Type: 1 (No provider specified)
00:23:55.468 RDMA CM Service: 1 (RDMA_CM)
00:23:55.468 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:55.468 [2024-07-25 19:14:47.812935] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
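The -r argument above is a standard SPDK transport-ID string carrying the whole connection tuple (transport, address family, target address, service ID, subsystem NQN). A minimal sketch of the connect path that string drives, assuming only SPDK's public host API (spdk/env.h, spdk/nvme.h) from roughly this tree's version (v24.09-pre): parse the tuple, connect synchronously, and read back a couple of the controller-data fields the log prints below. Error handling is trimmed and option structs stay at defaults; the real spdk_nvme_identify tool does considerably more.

```c
/* Sketch only: connect to the NVMe-oF RDMA target exercised above and
 * print two identify fields. Uses public SPDK APIs; not the identify
 * tool's actual source. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Bring up the SPDK environment (hugepages, memory registration). */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same tuple the test passes via -r above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous connect: performs the admin-queue FABRIC CONNECT and
     * the controller-init state machine traced in the debug log below. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, raw MDTS %u\n", cdata->cntlid, cdata->mdts);

    spdk_nvme_detach(ctrlr);
    return 0;
}
```

spdk_nvme_connect() is the single call that produces everything from "setting state to connect adminq" through "setting state to ready" in the trace that follows.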
00:23:55.468 [2024-07-25 19:14:47.812968] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854691 ] 00:23:55.468 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.468 [2024-07-25 19:14:47.854027] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:55.468 [2024-07-25 19:14:47.854098] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:55.468 [2024-07-25 19:14:47.854110] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:55.468 [2024-07-25 19:14:47.854114] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:55.468 [2024-07-25 19:14:47.854137] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:55.468 [2024-07-25 19:14:47.872515] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:55.468 [2024-07-25 19:14:47.883050] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:55.468 [2024-07-25 19:14:47.883059] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:55.468 [2024-07-25 19:14:47.883065] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883071] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883075] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883080] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883084] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883089] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883093] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883098] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883102] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883106] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883114] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883118] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883123] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883127] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883131] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883136] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883140] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883145] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883149] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883154] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883158] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883162] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883167] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883171] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883176] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883180] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883184] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883189] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883193] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883198] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883202] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.468 [2024-07-25 19:14:47.883206] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:55.468 [2024-07-25 19:14:47.883210] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:55.468 [2024-07-25 19:14:47.883213] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:55.469 [2024-07-25 19:14:47.883225] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.883236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x187f00 00:23:55.469 [2024-07-25 19:14:47.887905] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.887913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.887919] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.887926] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.469 [2024-07-25 19:14:47.887932] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:55.469 [2024-07-25 19:14:47.887937] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:55.469 [2024-07-25 19:14:47.887951] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.887958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.887981] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.887986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.887991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:55.469 [2024-07-25 19:14:47.887995] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888001] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:55.469 [2024-07-25 19:14:47.888007] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888034] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888044] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:55.469 [2024-07-25 19:14:47.888048] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888059] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888085] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888098] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 
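The state transitions logged around this point are the standard NVMe controller-enable handshake, carried over RDMA as Fabric Property Get/Set commands (which is why each step is bracketed by FABRIC PROPERTY GET/SET notices): read VS and CAP, check CC.EN, confirm or wait for CSTS.RDY to clear, write CC.EN = 1, then poll until CSTS.RDY = 1. A self-contained toy model of that loop, per the NVMe spec rather than SPDK's actual nvme_ctrlr.c state machine; prop_get()/prop_set() are hypothetical stand-ins for the property commands, backed by a fake controller whose RDY bit simply mirrors EN:

```c
/* Toy model of the CC.EN / CSTS.RDY enable handshake traced above.
 * prop_get()/prop_set() are hypothetical stand-ins for Fabric Property
 * Get/Set; the fake controller makes CSTS.RDY mirror CC.EN instantly. */
#include <stdint.h>
#include <stdio.h>

#define REG_CC   0x14
#define REG_CSTS 0x1c

static uint32_t cc_reg;   /* fake controller configuration register */
static uint32_t csts_reg; /* fake controller status register        */

static uint32_t prop_get(uint32_t off)
{
    if (off == REG_CSTS) {
        csts_reg = cc_reg & 1u; /* RDY follows EN in this toy model */
    }
    return off == REG_CC ? cc_reg : csts_reg;
}

static void prop_set(uint32_t off, uint32_t val)
{
    if (off == REG_CC) {
        cc_reg = val;
    }
}

int main(void)
{
    uint32_t cc = prop_get(REG_CC);

    if (cc & 1u) {                              /* CC.EN already set?  */
        prop_set(REG_CC, cc & ~1u);             /* disable first       */
        while (prop_get(REG_CSTS) & 1u) { }     /* wait CSTS.RDY == 0  */
    }
    /* In the log above CC reads back 0, so the driver lands here
     * directly: "CC.EN = 0 && CSTS.RDY = 0", controller is disabled. */
    prop_set(REG_CC, cc | 1u);                  /* CC.EN = 1           */
    while (!(prop_get(REG_CSTS) & 1u)) { }      /* wait CSTS.RDY == 1  */

    puts("controller ready: identify, AER, keep-alive setup follow");
    return 0;
}
```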
00:23:55.469 [2024-07-25 19:14:47.888105] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888129] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888138] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:55.469 [2024-07-25 19:14:47.888142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888146] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888257] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:55.469 [2024-07-25 19:14:47.888261] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888268] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888290] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:55.469 [2024-07-25 19:14:47.888303] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888310] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888338] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888346] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:55.469 [2024-07-25 19:14:47.888350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888354] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888359] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:55.469 [2024-07-25 19:14:47.888369] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888377] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x187f00 00:23:55.469 [2024-07-25 19:14:47.888421] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888432] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:55.469 [2024-07-25 19:14:47.888436] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:55.469 [2024-07-25 19:14:47.888440] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:55.469 [2024-07-25 19:14:47.888445] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:55.469 [2024-07-25 19:14:47.888449] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:55.469 [2024-07-25 19:14:47.888454] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888458] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888470] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.469 [2024-07-25 19:14:47.888505] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.469 [2024-07-25 19:14:47.888516] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x187f00 00:23:55.469 
[2024-07-25 19:14:47.888521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.469 [2024-07-25 19:14:47.888526] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.469 [2024-07-25 19:14:47.888537] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.469 [2024-07-25 19:14:47.888546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888550] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:55.469 [2024-07-25 19:14:47.888562] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.469 [2024-07-25 19:14:47.888568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.469 [2024-07-25 19:14:47.888585] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.469 [2024-07-25 19:14:47.888590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.888594] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:55.470 [2024-07-25 19:14:47.888599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888603] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888620] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.470 [2024-07-25 19:14:47.888643] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.888647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.888698] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888703] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888716] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x187f00 00:23:55.470 [2024-07-25 19:14:47.888751] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.888755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.888766] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:55.470 [2024-07-25 19:14:47.888774] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888778] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888791] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x187f00 00:23:55.470 [2024-07-25 19:14:47.888831] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.888835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.888845] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888850] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888856] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888863] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x187f00 00:23:55.470 [2024-07-25 19:14:47.888893] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.888897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.888908] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888912] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888934] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888943] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888947] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:55.470 [2024-07-25 19:14:47.888951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:55.470 [2024-07-25 19:14:47.888956] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:55.470 [2024-07-25 19:14:47.888968] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.470 [2024-07-25 19:14:47.888980] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.888986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.470 [2024-07-25 19:14:47.888996] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.889001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.889005] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889012] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.470 [2024-07-25 19:14:47.889024] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.889028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.889033] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889041] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.889046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.889050] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889057] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.470 [2024-07-25 19:14:47.889083] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.889088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.889092] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889100] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.470 [2024-07-25 19:14:47.889125] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.470 [2024-07-25 19:14:47.889129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:55.470 [2024-07-25 19:14:47.889134] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.470 [2024-07-25 19:14:47.889146] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x187f00 00:23:55.471 [2024-07-25 19:14:47.889152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x187f00 00:23:55.471 [2024-07-25 19:14:47.889159] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x187f00 00:23:55.471 [2024-07-25 19:14:47.889165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x187f00 00:23:55.471 [2024-07-25 19:14:47.889172] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x187f00 00:23:55.471 [2024-07-25 19:14:47.889178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x187f00 00:23:55.471 [2024-07-25 19:14:47.889185] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x187f00 00:23:55.471 [2024-07-25 19:14:47.889190] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x187f00
00:23:55.471 [2024-07-25 19:14:47.889197] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.471 [2024-07-25 19:14:47.889201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:55.471 [2024-07-25 19:14:47.889210] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00
00:23:55.471 [2024-07-25 19:14:47.889215] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.471 [2024-07-25 19:14:47.889219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:55.471 [2024-07-25 19:14:47.889227] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00
00:23:55.471 [2024-07-25 19:14:47.889231] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.471 [2024-07-25 19:14:47.889235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:55.471 [2024-07-25 19:14:47.889241] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00
00:23:55.471 [2024-07-25 19:14:47.889252] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:23:55.471 [2024-07-25 19:14:47.889256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:55.471 [2024-07-25 19:14:47.889263] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00
00:23:55.471 =====================================================
00:23:55.471 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:23:55.471 =====================================================
00:23:55.471 Controller Capabilities/Features
00:23:55.471 ================================
00:23:55.471 Vendor ID: 8086
00:23:55.471 Subsystem Vendor ID: 8086
00:23:55.471 Serial Number: SPDK00000000000001
00:23:55.471 Model Number: SPDK bdev Controller
00:23:55.471 Firmware Version: 24.09
00:23:55.471 Recommended Arb Burst: 6
00:23:55.471 IEEE OUI Identifier: e4 d2 5c
00:23:55.471 Multi-path I/O
00:23:55.471 May have multiple subsystem ports: Yes
00:23:55.471 May have multiple controllers: Yes
00:23:55.471 Associated with SR-IOV VF: No
00:23:55.471 Max Data Transfer Size: 131072
00:23:55.471 Max Number of Namespaces: 32
00:23:55.471 Max Number of I/O Queues: 127
00:23:55.471 NVMe Specification Version (VS): 1.3
00:23:55.471 NVMe Specification Version (Identify): 1.3
00:23:55.471 Maximum Queue Entries: 128
00:23:55.471 Contiguous Queues Required: Yes
00:23:55.471 Arbitration Mechanisms Supported
00:23:55.471 Weighted Round Robin: Not Supported
00:23:55.471 Vendor Specific: Not Supported
00:23:55.471 Reset Timeout: 15000 ms
00:23:55.471 Doorbell Stride: 4 bytes
00:23:55.471 NVM Subsystem Reset: Not Supported
00:23:55.471 Command Sets Supported
00:23:55.471 NVM Command Set: Supported
00:23:55.471 Boot Partition: Not Supported
00:23:55.471 Memory Page Size Minimum: 4096 bytes
00:23:55.471 Memory Page Size Maximum: 4096 bytes
00:23:55.471 Persistent Memory Region: Not Supported
00:23:55.471 Optional Asynchronous Events Supported
00:23:55.471 Namespace Attribute Notices: Supported
00:23:55.471 Firmware Activation Notices: Not Supported
00:23:55.471 ANA Change Notices: Not Supported
00:23:55.471 PLE Aggregate Log Change Notices: Not Supported
00:23:55.471 LBA Status Info Alert Notices: Not Supported
00:23:55.471 EGE Aggregate Log Change Notices: Not Supported
00:23:55.471 Normal NVM Subsystem Shutdown event: Not Supported
00:23:55.471 Zone Descriptor Change Notices: Not Supported
00:23:55.471 Discovery Log Change Notices: Not Supported
00:23:55.471 Controller Attributes
00:23:55.471 128-bit Host Identifier: Supported
00:23:55.471 Non-Operational Permissive Mode: Not Supported
00:23:55.471 NVM Sets: Not Supported
00:23:55.471 Read Recovery Levels: Not Supported
00:23:55.471 Endurance Groups: Not Supported
00:23:55.471 Predictable Latency Mode: Not Supported
00:23:55.471 Traffic Based Keep ALive: Not Supported
00:23:55.471 Namespace Granularity: Not Supported
00:23:55.471 SQ Associations: Not Supported
00:23:55.471 UUID List: Not Supported
00:23:55.471 Multi-Domain Subsystem: Not Supported
00:23:55.471 Fixed Capacity Management: Not Supported
00:23:55.471 Variable Capacity Management: Not Supported
00:23:55.471 Delete Endurance Group: Not Supported
00:23:55.471 Delete NVM Set: Not Supported
00:23:55.471 Extended LBA Formats Supported: Not Supported
00:23:55.471 Flexible Data Placement Supported: Not Supported
00:23:55.471
00:23:55.471 Controller Memory Buffer Support
00:23:55.471 ================================
00:23:55.471 Supported: No
00:23:55.471
00:23:55.471 Persistent Memory Region Support
00:23:55.471 ================================
00:23:55.471 Supported: No
00:23:55.471
00:23:55.471 Admin Command Set Attributes
00:23:55.471 ============================
00:23:55.471 Security Send/Receive: Not Supported
00:23:55.471 Format NVM: Not Supported
00:23:55.471 Firmware Activate/Download: Not Supported
00:23:55.471 Namespace Management: Not Supported
00:23:55.471 Device Self-Test: Not Supported
00:23:55.471 Directives: Not Supported
00:23:55.471 NVMe-MI: Not Supported
00:23:55.471 Virtualization Management: Not Supported
00:23:55.471 Doorbell Buffer Config: Not Supported
00:23:55.471 Get LBA Status Capability: Not Supported
00:23:55.471 Command & Feature Lockdown Capability: Not Supported
00:23:55.471 Abort Command Limit: 4
00:23:55.471 Async Event Request Limit: 4
00:23:55.471 Number of Firmware Slots: N/A
00:23:55.471 Firmware Slot 1 Read-Only: N/A
00:23:55.471 Firmware Activation Without Reset: N/A
00:23:55.471 Multiple Update Detection Support: N/A
00:23:55.471 Firmware Update Granularity: No Information Provided
00:23:55.471 Per-Namespace SMART Log: No
00:23:55.471 Asymmetric Namespace Access Log Page: Not Supported
00:23:55.471 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:55.471 Command Effects Log Page: Supported
00:23:55.471 Get Log Page Extended Data: Supported
00:23:55.471 Telemetry Log Pages: Not Supported
00:23:55.471 Persistent Event Log Pages: Not Supported
00:23:55.471 Supported Log Pages Log Page: May Support
00:23:55.471 Commands Supported & Effects Log Page: Not Supported
00:23:55.471 Feature Identifiers & Effects Log Page:May Support
00:23:55.471 NVMe-MI Commands & Effects Log Page: May Support
00:23:55.471 Data Area 4 for Telemetry Log: Not Supported
00:23:55.471 Error Log Page Entries Supported: 128
00:23:55.471 Keep Alive: Supported
00:23:55.471 Keep Alive Granularity: 10000 ms
00:23:55.471
00:23:55.471 NVM Command Set Attributes
00:23:55.471 ==========================
00:23:55.471 Submission Queue Entry Size
00:23:55.471 Max: 64
00:23:55.471 Min: 64
00:23:55.471 Completion Queue Entry Size
00:23:55.471 Max: 16
00:23:55.471 Min: 16
00:23:55.471 Number of Namespaces: 32
00:23:55.471 Compare Command: Supported
00:23:55.471 Write Uncorrectable Command: Not Supported
00:23:55.471 Dataset Management Command: Supported
00:23:55.471 Write Zeroes Command: Supported
00:23:55.471 Set Features Save Field: Not Supported
00:23:55.471 Reservations: Supported
00:23:55.471 Timestamp: Not Supported
00:23:55.471 Copy: Supported
00:23:55.471 Volatile Write Cache: Present
00:23:55.471 Atomic Write Unit (Normal): 1
00:23:55.471 Atomic Write Unit (PFail): 1
00:23:55.471 Atomic Compare & Write Unit: 1
00:23:55.471 Fused Compare & Write: Supported
00:23:55.471 Scatter-Gather List
00:23:55.471 SGL Command Set: Supported
00:23:55.471 SGL Keyed: Supported
00:23:55.471 SGL Bit Bucket Descriptor: Not Supported
00:23:55.471 SGL Metadata Pointer: Not Supported
00:23:55.471 Oversized SGL: Not Supported
00:23:55.471 SGL Metadata Address: Not Supported
00:23:55.471 SGL Offset: Supported
00:23:55.471 Transport SGL Data Block: Not Supported
00:23:55.471 Replay Protected Memory Block: Not Supported
00:23:55.471
00:23:55.471 Firmware Slot Information
00:23:55.471 =========================
00:23:55.471 Active slot: 1
00:23:55.471 Slot 1 Firmware Revision: 24.09
00:23:55.471
00:23:55.471
00:23:55.472 Commands Supported and Effects
00:23:55.472 ==============================
00:23:55.472 Admin Commands
00:23:55.472 --------------
00:23:55.472 Get Log Page (02h): Supported
00:23:55.472 Identify (06h): Supported
00:23:55.472 Abort (08h): Supported
00:23:55.472 Set Features (09h): Supported
00:23:55.472 Get Features (0Ah): Supported
00:23:55.472 Asynchronous Event Request (0Ch): Supported
00:23:55.472 Keep Alive (18h): Supported
00:23:55.472 I/O Commands
00:23:55.472 ------------
00:23:55.472 Flush (00h): Supported LBA-Change
00:23:55.472 Write (01h): Supported LBA-Change
00:23:55.472 Read (02h): Supported
00:23:55.472 Compare (05h): Supported
00:23:55.472 Write Zeroes (08h): Supported LBA-Change
00:23:55.472 Dataset Management (09h): Supported LBA-Change
00:23:55.472 Copy (19h): Supported LBA-Change
00:23:55.472
00:23:55.472 Error Log
00:23:55.472 =========
00:23:55.472
00:23:55.472 Arbitration
00:23:55.472 ===========
00:23:55.472 Arbitration Burst: 1
00:23:55.472
00:23:55.472 Power Management
00:23:55.472 ================
00:23:55.472 Number of Power States: 1
00:23:55.472 Current Power State: Power State #0
00:23:55.472 Power State #0:
00:23:55.472 Max Power: 0.00 W
00:23:55.472 Non-Operational State: Operational
00:23:55.472 Entry Latency: Not Reported
00:23:55.472 Exit Latency: Not Reported
00:23:55.472 Relative Read Throughput: 0
00:23:55.472 Relative Read Latency: 0
00:23:55.472 Relative Write Throughput: 0
00:23:55.472 Relative Write Latency: 0
00:23:55.472 Idle Power: Not Reported
00:23:55.472 Active Power: Not Reported
00:23:55.472 Non-Operational Permissive Mode: Not Supported
00:23:55.472
00:23:55.472 Health Information
00:23:55.472 ==================
00:23:55.472 Critical Warnings:
00:23:55.472 Available Spare Space: OK
00:23:55.472 Temperature: OK
00:23:55.472 Device Reliability: OK
00:23:55.472 Read Only: No
00:23:55.472 Volatile Memory Backup: OK
00:23:55.472 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:55.472 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:55.472 Available Spare: 0%
00:23:55.472 Available Spare Threshold: 0%
00:23:55.472 Life Percentage [2024-07-25 19:14:47.889337]
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889364] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889373] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889398] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:55.472 [2024-07-25 19:14:47.889406] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21331 doesn't match qid 00:23:55.472 [2024-07-25 19:14:47.889418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:5 sqhd:3f40 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889422] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21331 doesn't match qid 00:23:55.472 [2024-07-25 19:14:47.889428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:5 sqhd:3f40 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889433] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21331 doesn't match qid 00:23:55.472 [2024-07-25 19:14:47.889439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:5 sqhd:3f40 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889444] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21331 doesn't match qid 00:23:55.472 [2024-07-25 19:14:47.889449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32698 cdw0:5 sqhd:3f40 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889456] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889486] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889496] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889507] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889523] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889533] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:55.472 [2024-07-25 19:14:47.889537] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:55.472 [2024-07-25 19:14:47.889541] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889548] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889570] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889579] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889586] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889613] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889622] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889629] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889651] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889660] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889667] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889692] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889702] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889709] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889735] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889744] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889752] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889774] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889790] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889816] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.472 [2024-07-25 19:14:47.889821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.472 [2024-07-25 19:14:47.889825] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889832] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.472 [2024-07-25 19:14:47.889840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.472 [2024-07-25 19:14:47.889860] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.889869] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889876] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.889905] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:55.473 
[2024-07-25 19:14:47.889915] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889922] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.889950] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.889955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.889959] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889966] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.889972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.889993] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.889998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890002] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890009] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890033] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890041] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890049] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890071] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890080] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890088] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890111] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890120] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890127] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890149] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890158] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890165] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890187] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890196] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890202] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890227] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890235] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890243] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890266] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890275] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890282] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890304] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890313] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890321] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890348] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890357] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890364] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890387] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890395] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890402] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890428] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890437] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890467] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 
19:14:47.890476] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890483] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890507] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890516] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890523] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.473 [2024-07-25 19:14:47.890550] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.473 [2024-07-25 19:14:47.890554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:55.473 [2024-07-25 19:14:47.890560] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.473 [2024-07-25 19:14:47.890567] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890589] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890598] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890605] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890632] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890648] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890673] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890682] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890689] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890713] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890722] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890729] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890752] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890761] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890768] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890790] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890800] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890808] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890831] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890840] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890847] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890871] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890880] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890887] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890910] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890920] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890926] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890949] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890958] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890965] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.890971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.890987] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.890991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.890996] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891003] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891025] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 
19:14:47.891035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891042] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891073] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891082] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891088] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891111] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891120] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891127] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891152] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891161] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891168] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891195] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891204] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891211] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891233] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891242] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891249] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.474 [2024-07-25 19:14:47.891255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.474 [2024-07-25 19:14:47.891272] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.474 [2024-07-25 19:14:47.891277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:55.474 [2024-07-25 19:14:47.891281] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891288] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891309] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891317] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891324] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891350] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891359] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891365] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891391] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891400] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891407] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891432] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891441] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891448] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891474] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891482] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891489] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891516] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891525] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891532] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891556] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891565] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891572] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891601] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 
19:14:47.891610] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891617] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891640] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891649] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891656] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891682] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891690] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891697] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891718] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891727] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891734] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891758] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891767] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891774] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891799] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891808] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891815] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891837] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:55.475 [2024-07-25 19:14:47.891846] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891853] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.475 [2024-07-25 19:14:47.891859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.475 [2024-07-25 19:14:47.891875] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.475 [2024-07-25 19:14:47.891880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:55.476 [2024-07-25 19:14:47.891884] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x187f00 00:23:55.476 [2024-07-25 19:14:47.891891] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.476 [2024-07-25 19:14:47.891897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.476 [2024-07-25 19:14:47.895910] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.476 [2024-07-25 19:14:47.895923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:55.476 [2024-07-25 19:14:47.895928] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x187f00 00:23:55.476 [2024-07-25 19:14:47.895935] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x187f00 00:23:55.476 [2024-07-25 19:14:47.895942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:55.476 [2024-07-25 19:14:47.895960] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:55.476 [2024-07-25 19:14:47.895964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0016 p:0 m:0 dnr:0 00:23:55.476 [2024-07-25 19:14:47.895969] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x187f00 00:23:55.476 [2024-07-25 19:14:47.895974] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
shutdown complete in 6 milliseconds 00:23:55.734 Used: 0% 00:23:55.734 Data Units Read: 0 00:23:55.734 Data Units Written: 0 00:23:55.734 Host Read Commands: 0 00:23:55.734 Host Write Commands: 0 00:23:55.735 Controller Busy Time: 0 minutes 00:23:55.735 Power Cycles: 0 00:23:55.735 Power On Hours: 0 hours 00:23:55.735 Unsafe Shutdowns: 0 00:23:55.735 Unrecoverable Media Errors: 0 00:23:55.735 Lifetime Error Log Entries: 0 00:23:55.735 Warning Temperature Time: 0 minutes 00:23:55.735 Critical Temperature Time: 0 minutes 00:23:55.735 00:23:55.735 Number of Queues 00:23:55.735 ================ 00:23:55.735 Number of I/O Submission Queues: 127 00:23:55.735 Number of I/O Completion Queues: 127 00:23:55.735 00:23:55.735 Active Namespaces 00:23:55.735 ================= 00:23:55.735 Namespace ID:1 00:23:55.735 Error Recovery Timeout: Unlimited 00:23:55.735 Command Set Identifier: NVM (00h) 00:23:55.735 Deallocate: Supported 00:23:55.735 Deallocated/Unwritten Error: Not Supported 00:23:55.735 Deallocated Read Value: Unknown 00:23:55.735 Deallocate in Write Zeroes: Not Supported 00:23:55.735 Deallocated Guard Field: 0xFFFF 00:23:55.735 Flush: Supported 00:23:55.735 Reservation: Supported 00:23:55.735 Namespace Sharing Capabilities: Multiple Controllers 00:23:55.735 Size (in LBAs): 131072 (0GiB) 00:23:55.735 Capacity (in LBAs): 131072 (0GiB) 00:23:55.735 Utilization (in LBAs): 131072 (0GiB) 00:23:55.735 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:55.735 EUI64: ABCDEF0123456789 00:23:55.735 UUID: c7abbd99-2c97-4a57-90cd-ef88d1fe7d05 00:23:55.735 Thin Provisioning: Not Supported 00:23:55.735 Per-NS Atomic Units: Yes 00:23:55.735 Atomic Boundary Size (Normal): 0 00:23:55.735 Atomic Boundary Size (PFail): 0 00:23:55.735 Atomic Boundary Offset: 0 00:23:55.735 Maximum Single Source Range Length: 65535 00:23:55.735 Maximum Copy Length: 65535 00:23:55.735 Maximum Source Range Count: 1 00:23:55.735 NGUID/EUI64 Never Reused: No 00:23:55.735 Namespace Write Protected: No 00:23:55.735 Number of LBA Formats: 1 00:23:55.735 Current LBA Format: LBA Format #00 00:23:55.735 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:55.735 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:55.735 rmmod nvme_rdma 00:23:55.735 rmmod nvme_fabrics 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 854439 ']' 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 854439 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 854439 ']' 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 854439 00:23:55.735 19:14:47 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854439 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854439' 00:23:55.735 killing process with pid 854439 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 854439 00:23:55.735 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 854439 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:55.993 00:23:55.993 real 0m7.824s 00:23:55.993 user 0m8.256s 00:23:55.993 sys 0m4.859s 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.993 ************************************ 00:23:55.993 END TEST nvmf_identify 00:23:55.993 ************************************ 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.993 ************************************ 00:23:55.993 START TEST nvmf_perf 00:23:55.993 ************************************ 00:23:55.993 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:56.252 * Looking for test storage... 
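The nvmftestfini trace above tears the host and target down in a fixed order: remove the subsystem over RPC, kill the nvmf target process, then unload the host-side kernel modules. A minimal manual sketch of the same steps follows; the PID (854439) and the rpc.py path are taken from this run, and the sketch assumes a single target process.

    # Hedged sketch: manual equivalent of the nvmftestfini teardown traced above.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # in-tree RPC client used by the test
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1              # remove the subsystem first
    kill 854439                                                        # stop the nvmf target (pid from this log)
    modprobe -v -r nvme-rdma                                           # unload the host RDMA transport module
    modprobe -v -r nvme-fabrics                                        # then the fabrics core module

The module unloads mirror the "rmmod nvme_rdma" / "rmmod nvme_fabrics" output visible in the trace; the order matters because nvme-fabrics cannot be removed while nvme-rdma still depends on it.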
00:23:56.252 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.252 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.253 19:14:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:24:02.820 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:24:02.820 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:24:02.820 Found net devices under 0000:af:00.0: mlx_0_0 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:24:02.820 Found net devices under 0000:af:00.1: mlx_0_1 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:02.820 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:02.820 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:02.821 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:24:02.821 altname enp175s0f0np0 00:24:02.821 altname ens801f0np0 00:24:02.821 inet 192.168.100.8/24 scope global mlx_0_0 00:24:02.821 valid_lft forever preferred_lft forever 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:02.821 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:02.821 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:24:02.821 altname enp175s0f1np1 00:24:02.821 altname ens801f1np1 00:24:02.821 inet 192.168.100.9/24 scope global mlx_0_1 00:24:02.821 valid_lft forever preferred_lft forever 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:02.821 192.168.100.9' 00:24:02.821 
19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:02.821 192.168.100.9' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:02.821 192.168.100.9' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=857947 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 857947 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 857947 ']' 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.821 19:14:54 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:02.821 [2024-07-25 19:14:54.347482] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:02.821 [2024-07-25 19:14:54.347529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.821 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.821 [2024-07-25 19:14:54.419444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.821 [2024-07-25 19:14:54.498177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:02.821 [2024-07-25 19:14:54.498213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:02.821 [2024-07-25 19:14:54.498221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:02.821 [2024-07-25 19:14:54.498226] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:02.821 [2024-07-25 19:14:54.498231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:02.821 [2024-07-25 19:14:54.498280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:02.821 [2024-07-25 19:14:54.498325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:02.821 [2024-07-25 19:14:54.498340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:02.821 [2024-07-25 19:14:54.498345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:02.821 19:14:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:06.109 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:06.109 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:06.109 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:24:06.109 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:06.368 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:24:06.368 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:24:06.368 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:06.368 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']'
00:24:06.368 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
00:24:06.627 [2024-07-25 19:14:58.854079] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:24:06.627 [2024-07-25 19:14:58.873595] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5905c0/0x59de70) succeed.
00:24:06.627 [2024-07-25 19:14:58.883035] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x591c00/0x61df00) succeed.
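
Everything the target needs is configured over JSON-RPC in the trace above and below: a malloc bdev is created alongside the local NVMe drive, an RDMA transport is instantiated, and a subsystem with two namespaces is exposed on 192.168.100.8:4420. Condensed into one place (rpc.py talking to the default /var/tmp/spdk.sock socket is assumed), the bring-up reduces to:

  scripts/rpc.py bdev_malloc_create 64 512                             # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0   # -c 0: minimum in-capsule data size
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # becomes NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # becomes NSID 2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
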
00:24:06.627 19:14:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:06.886 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:06.886 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:07.145 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:07.145 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:07.145 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:24:07.403 [2024-07-25 19:14:59.756554] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:24:07.403 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:24:07.661 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:24:07.661 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:07.661 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:07.661 19:14:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:09.037 Initializing NVMe Controllers
00:24:09.038 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:24:09.038 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:24:09.038 Initialization complete. Launching workers.
00:24:09.038 ========================================================
00:24:09.038 Latency(us)
00:24:09.038 Device Information : IOPS MiB/s Average min max
00:24:09.038 PCIE (0000:5e:00.0) NSID 1 from core 0: 97856.21 382.25 326.55 36.22 7201.38
00:24:09.038 ========================================================
00:24:09.038 Total : 97856.21 382.25 326.55 36.22 7201.38
00:24:09.038 
00:24:09.038 19:15:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:24:09.038 EAL: No free 2048 kB hugepages reported on node 1
00:24:12.324 Initializing NVMe Controllers
00:24:12.324 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:12.324 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:12.324 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:12.324 Initialization complete. Launching workers.
00:24:12.324 ========================================================
00:24:12.324 Latency(us)
00:24:12.324 Device Information : IOPS MiB/s Average min max
00:24:12.324 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6449.02 25.19 154.23 48.89 8037.32
00:24:12.324 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5066.16 19.79 197.19 69.98 8050.07
00:24:12.324 ========================================================
00:24:12.324 Total : 11515.19 44.98 173.13 48.89 8050.07
00:24:12.324 
00:24:12.324 19:15:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:24:12.324 EAL: No free 2048 kB hugepages reported on node 1
00:24:15.610 Initializing NVMe Controllers
00:24:15.610 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:15.610 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:15.610 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:15.610 Initialization complete. Launching workers.
00:24:15.610 ========================================================
00:24:15.610 Latency(us)
00:24:15.610 Device Information : IOPS MiB/s Average min max
00:24:15.610 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17474.98 68.26 1831.85 516.76 8450.61
00:24:15.610 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7994.93 5750.11 10916.23
00:24:15.610 ========================================================
00:24:15.610 Total : 21506.98 84.01 2987.27 516.76 10916.23
00:24:15.610 
00:24:15.610 19:15:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:24:15.610 19:15:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:24:15.610 EAL: No free 2048 kB hugepages reported on node 1
00:24:20.880 Initializing NVMe Controllers
00:24:20.880 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:20.880 Controller IO queue size 128, less than required.
00:24:20.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:20.880 Controller IO queue size 128, less than required.
00:24:20.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:20.880 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:20.880 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:20.880 Initialization complete. Launching workers.
00:24:20.880 ========================================================
00:24:20.880 Latency(us)
00:24:20.880 Device Information : IOPS MiB/s Average min max
00:24:20.880 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3813.46 953.37 33769.79 16859.68 68509.67
00:24:20.880 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3984.41 996.10 31890.98 15426.64 48527.40
00:24:20.880 ========================================================
00:24:20.880 Total : 7797.88 1949.47 32809.79 15426.64 68509.67
00:24:20.880 
00:24:20.880 19:15:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:24:20.880 EAL: No free 2048 kB hugepages reported on node 1
00:24:20.880 No valid NVMe controllers or AIO or URING devices found
00:24:20.880 Initializing NVMe Controllers
00:24:20.880 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:20.880 Controller IO queue size 128, less than required.
00:24:20.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:20.880 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:20.880 Controller IO queue size 128, less than required.
00:24:20.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:20.880 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:20.880 WARNING: Some requested NVMe devices were skipped
00:24:20.880 19:15:12 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:24:20.880 EAL: No free 2048 kB hugepages reported on node 1
00:24:25.071 Initializing NVMe Controllers
00:24:25.071 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:25.071 Controller IO queue size 128, less than required.
00:24:25.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:25.071 Controller IO queue size 128, less than required.
00:24:25.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:25.071 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:25.071 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:25.071 Initialization complete. Launching workers.
00:24:25.071 
00:24:25.071 ====================
00:24:25.071 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:25.071 RDMA transport:
00:24:25.071 dev name: mlx5_0
00:24:25.071 polls: 388956
00:24:25.071 idle_polls: 385954
00:24:25.071 completions: 42678
00:24:25.071 queued_requests: 1
00:24:25.071 total_send_wrs: 21339
00:24:25.071 send_doorbell_updates: 2770
00:24:25.071 total_recv_wrs: 21466
00:24:25.071 recv_doorbell_updates: 2772
00:24:25.071 ---------------------------------
00:24:25.071 
00:24:25.071 ====================
00:24:25.071 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:25.071 RDMA transport:
00:24:25.071 dev name: mlx5_0
00:24:25.071 polls: 395968
00:24:25.071 idle_polls: 395695
00:24:25.071 completions: 19902
00:24:25.071 queued_requests: 1
00:24:25.071 total_send_wrs: 9951
00:24:25.071 send_doorbell_updates: 253
00:24:25.071 total_recv_wrs: 10078
00:24:25.071 recv_doorbell_updates: 255
00:24:25.071 ---------------------------------
00:24:25.071 ========================================================
00:24:25.071 Latency(us)
00:24:25.071 Device Information : IOPS MiB/s Average min max
00:24:25.071 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5334.49 1333.62 24064.11 11333.75 58528.86
00:24:25.071 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2487.50 621.87 51540.83 29888.89 76225.47
00:24:25.071 ========================================================
00:24:25.071 Total : 7821.99 1955.50 32802.07 11333.75 76225.47
00:24:25.071 
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:24:25.071 rmmod nvme_rdma
00:24:25.071 rmmod nvme_fabrics
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 857947 ']'
00:24:25.071 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 857947
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 857947 ']'
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 857947
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 857947
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 857947'
00:24:25.072 killing process with pid 857947
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 857947
00:24:25.072 19:15:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 857947
00:24:26.450 19:15:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:26.450 19:15:18 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:24:26.450 
00:24:26.450 real 0m30.517s
00:24:26.450 user 1m39.890s
00:24:26.450 sys 0m5.454s
00:24:26.450 19:15:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:26.450 19:15:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:26.450 ************************************
00:24:26.450 END TEST nvmf_perf
00:24:26.450 ************************************
00:24:26.710 19:15:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:24:26.710 19:15:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:26.710 19:15:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:26.710 19:15:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:26.710 ************************************
00:24:26.710 START TEST nvmf_fio_host
00:24:26.710 ************************************
00:24:26.710 19:15:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:24:26.710 * Looking for test storage...
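
Each suite in this log is driven through the run_test helper, which produces the starred START/END banners and the real/user/sys block above (bash's time builtin). A rough reconstruction of that wrapper, assumed from the output rather than quoted from autotest_common.sh:

  run_test() {                       # hypothetical sketch of the harness wrapper
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                      # e.g. .../test/nvmf/host/fio.sh --transport=rdma
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }
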
00:24:26.710 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:26.710 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:26.710 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.710 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.710 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.710 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:26.711 19:15:19 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.711 19:15:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
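Aside: the gather_supported_nvmf_pci_devs trace above builds per-family allowlists of NIC PCI IDs (Intel E810/X722, Mellanox ConnectX) and matches them against the bus, and allocate_nic_ips further below reads each RDMA interface's IPv4 address with an ip/awk/cut pipeline. A minimal standalone sketch of that discovery logic, assuming the standard sysfs layout and iproute2 tooling these scripts rely on — the 0x15b3:0x1017 pair is the ConnectX-5 ID this log actually finds; the output format is illustrative:

    #!/usr/bin/env bash
    # Sketch of PCI-ID-based NIC discovery, modeled on gather_supported_nvmf_pci_devs.
    shopt -s nullglob
    vendor=0x15b3 device=0x1017   # Mellanox ConnectX-5, as seen in this log
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$vendor" ]] || continue
        [[ $(cat "$pci/device") == "$device" ]] || continue
        [[ -d $pci/net ]] || continue        # skip functions with no netdev bound
        for net in "$pci"/net/*; do
            ifname=${net##*/}
            # Same pipeline the trace uses: field 4 of `ip -o -4` is the CIDR
            # address; strip the /prefix length to get the bare IPv4 address.
            addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
            echo "${pci##*/} $ifname ${addr:-<no IPv4 address>}"
        done
    done

On this node the loop would report 0000:af:00.0/mlx_0_0 at 192.168.100.8 and 0000:af:00.1/mlx_0_1 at 192.168.100.9, matching the trace.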
00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:24:33.281 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:24:33.281 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:24:33.281 Found net devices under 0000:af:00.0: mlx_0_0 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:24:33.281 Found net devices under 0000:af:00.1: mlx_0_1 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:33.281 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:33.282 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:33.282 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:24:33.282 altname enp175s0f0np0 00:24:33.282 altname ens801f0np0 00:24:33.282 inet 192.168.100.8/24 scope global mlx_0_0 00:24:33.282 valid_lft forever preferred_lft forever 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 
00:24:33.282 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:33.282 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:24:33.282 altname enp175s0f1np1 00:24:33.282 altname ens801f1np1 00:24:33.282 inet 192.168.100.9/24 scope global mlx_0_1 00:24:33.282 valid_lft forever preferred_lft forever 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # 
for nic_name in $(get_rdma_if_list) 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:33.282 192.168.100.9' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:33.282 192.168.100.9' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:33.282 192.168.100.9' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=865621 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 865621 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 865621 ']' 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.282 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:33.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.283 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.283 19:15:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.283 [2024-07-25 19:15:24.995077] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:33.283 [2024-07-25 19:15:24.995140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.283 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.283 [2024-07-25 19:15:25.064501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.283 [2024-07-25 19:15:25.143721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.283 [2024-07-25 19:15:25.143760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.283 [2024-07-25 19:15:25.143767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.283 [2024-07-25 19:15:25.143773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.283 [2024-07-25 19:15:25.143778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.283 [2024-07-25 19:15:25.143835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.283 [2024-07-25 19:15:25.143944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.283 [2024-07-25 19:15:25.143990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.283 [2024-07-25 19:15:25.143991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.540 19:15:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.540 19:15:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:33.541 19:15:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:33.799 [2024-07-25 19:15:26.045168] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86edf0/0x8732e0) succeed. 00:24:33.799 [2024-07-25 19:15:26.054505] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x870430/0x8b4980) succeed. 
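Aside: with the RDMA transport created just above, the setup that the trace below records reduces to a short RPC sequence — back a subsystem with a 64 MiB malloc bdev, expose it on 192.168.100.8:4420, then drive it with fio through the SPDK NVMe plugin. A condensed sketch using the same commands the log shows, assuming $SPDK_DIR (a name introduced here) points at this job's checkout and fio lives at /usr/src/fio as on these test nodes:

    #!/usr/bin/env bash
    # Condensed from the fio.sh trace: NVMe-oF/RDMA target setup plus an fio
    # run through the preloaded SPDK ioengine.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumed checkout path
    rpc="$SPDK_DIR/scripts/rpc.py"

    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # LD_PRELOAD makes ioengine=spdk resolve inside stock fio; the filename
    # string encodes the transport address of the namespace under test.
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
        "$SPDK_DIR/app/fio/nvme/example_config.fio" \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096

The ldd/grep/awk probing in the trace below only decides whether an ASAN runtime must be added to LD_PRELOAD ahead of the plugin; when no sanitizer is linked in, asan_lib stays empty and the invocation is exactly the one sketched here.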
00:24:33.799 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:33.799 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.799 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.799 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:34.057 Malloc1 00:24:34.057 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.315 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.573 19:15:26 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:34.573 [2024-07-25 19:15:26.991923] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:34.573 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:34.831 19:15:27 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:34.831 19:15:27 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:35.090 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:35.090 fio-3.35 00:24:35.090 Starting 1 thread 00:24:35.348 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.882 00:24:37.882 test: (groupid=0, jobs=1): err= 0: pid=866232: Thu Jul 25 19:15:29 2024 00:24:37.882 read: IOPS=17.2k, BW=67.1MiB/s (70.3MB/s)(134MiB/2004msec) 00:24:37.882 slat (nsec): min=1417, max=34207, avg=1528.60, stdev=414.17 00:24:37.882 clat (usec): min=1713, max=6633, avg=3699.42, stdev=93.28 00:24:37.882 lat (usec): min=1727, max=6634, avg=3700.95, stdev=93.18 00:24:37.882 clat percentiles (usec): 00:24:37.882 | 1.00th=[ 3654], 5.00th=[ 3687], 10.00th=[ 3687], 20.00th=[ 3687], 00:24:37.882 | 30.00th=[ 3687], 40.00th=[ 3687], 50.00th=[ 3687], 60.00th=[ 3687], 00:24:37.882 | 70.00th=[ 3720], 80.00th=[ 3720], 90.00th=[ 3720], 95.00th=[ 3720], 00:24:37.882 | 99.00th=[ 3752], 99.50th=[ 3818], 99.90th=[ 5211], 99.95th=[ 6128], 00:24:37.882 | 99.99th=[ 6587] 00:24:37.882 bw ( KiB/s): min=67344, max=69408, per=100.00%, avg=68694.00, stdev=943.35, samples=4 00:24:37.882 iops : min=16836, max=17352, avg=17173.50, stdev=235.84, samples=4 00:24:37.882 write: IOPS=17.2k, BW=67.2MiB/s (70.4MB/s)(135MiB/2004msec); 0 zone resets 00:24:37.882 slat (nsec): min=1447, max=17102, avg=1593.93, stdev=402.34 00:24:37.882 clat (usec): min=2516, max=6627, avg=3697.33, stdev=84.39 00:24:37.882 lat (usec): min=2527, max=6628, avg=3698.93, stdev=84.28 00:24:37.882 clat percentiles (usec): 00:24:37.882 | 1.00th=[ 3654], 5.00th=[ 3687], 10.00th=[ 3687], 20.00th=[ 3687], 00:24:37.882 | 30.00th=[ 3687], 40.00th=[ 3687], 50.00th=[ 3687], 60.00th=[ 3687], 00:24:37.882 | 70.00th=[ 3720], 80.00th=[ 3720], 90.00th=[ 3720], 95.00th=[ 3720], 00:24:37.882 | 99.00th=[ 3752], 99.50th=[ 3818], 99.90th=[ 4752], 99.95th=[ 5669], 00:24:37.882 | 99.99th=[ 6587] 00:24:37.882 bw ( KiB/s): min=67504, max=69568, per=100.00%, avg=68790.00, stdev=892.93, samples=4 00:24:37.882 iops : min=16876, max=17392, avg=17197.50, stdev=223.23, samples=4 00:24:37.882 lat (msec) : 2=0.01%, 4=99.57%, 10=0.43% 00:24:37.882 cpu : usr=99.45%, sys=0.10%, 
ctx=16, majf=0, minf=4 00:24:37.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:37.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.882 issued rwts: total=34413,34456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.882 00:24:37.882 Run status group 0 (all jobs): 00:24:37.882 READ: bw=67.1MiB/s (70.3MB/s), 67.1MiB/s-67.1MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2004-2004msec 00:24:37.882 WRITE: bw=67.2MiB/s (70.4MB/s), 67.2MiB/s-67.2MiB/s (70.4MB/s-70.4MB/s), io=135MiB (141MB), run=2004-2004msec 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.882 19:15:29 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:37.882 19:15:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:37.882 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:37.882 fio-3.35 00:24:37.882 Starting 1 thread 00:24:37.882 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.417 00:24:40.417 test: (groupid=0, jobs=1): err= 0: pid=866808: Thu Jul 25 19:15:32 2024 00:24:40.417 read: IOPS=13.7k, BW=214MiB/s (225MB/s)(421MiB/1963msec) 00:24:40.417 slat (nsec): min=2350, max=44996, avg=2700.14, stdev=1060.99 00:24:40.417 clat (usec): min=542, max=8339, avg=1720.83, stdev=1376.78 00:24:40.417 lat (usec): min=544, max=8359, avg=1723.53, stdev=1377.11 00:24:40.417 clat percentiles (usec): 00:24:40.417 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 963], 00:24:40.417 | 30.00th=[ 1029], 40.00th=[ 1106], 50.00th=[ 1237], 60.00th=[ 1352], 00:24:40.417 | 70.00th=[ 1500], 80.00th=[ 1696], 90.00th=[ 4948], 95.00th=[ 5145], 00:24:40.417 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 7767], 99.95th=[ 7832], 00:24:40.417 | 99.99th=[ 8291] 00:24:40.417 bw ( KiB/s): min=106208, max=108896, per=48.95%, avg=107448.00, stdev=1110.17, samples=4 00:24:40.417 iops : min= 6638, max= 6806, avg=6715.50, stdev=69.39, samples=4 00:24:40.417 write: IOPS=7869, BW=123MiB/s (129MB/s)(219MiB/1781msec); 0 zone resets 00:24:40.417 slat (usec): min=27, max=134, avg=30.09, stdev= 5.63 00:24:40.417 clat (usec): min=4902, max=19415, avg=13121.44, stdev=1948.44 00:24:40.417 lat (usec): min=4932, max=19446, avg=13151.53, stdev=1948.15 00:24:40.417 clat percentiles (usec): 00:24:40.417 | 1.00th=[ 7373], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:24:40.417 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:24:40.417 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15664], 95.00th=[16450], 00:24:40.417 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19268], 00:24:40.417 | 99.99th=[19268] 00:24:40.417 bw ( KiB/s): min=107904, max=113248, per=88.31%, avg=111192.00, stdev=2300.05, samples=4 00:24:40.417 iops : min= 6744, max= 7078, avg=6949.50, stdev=143.75, samples=4 00:24:40.417 lat (usec) : 750=1.23%, 1000=15.71% 00:24:40.417 lat (msec) : 2=39.25%, 4=2.33%, 10=8.44%, 20=33.05% 00:24:40.417 cpu : usr=96.96%, sys=1.55%, ctx=183, majf=0, minf=3 00:24:40.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:40.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:40.417 issued rwts: total=26932,14016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:40.417 00:24:40.417 Run status group 0 (all jobs): 00:24:40.417 READ: bw=214MiB/s (225MB/s), 214MiB/s-214MiB/s (225MB/s-225MB/s), io=421MiB (441MB), run=1963-1963msec 00:24:40.417 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=219MiB (230MB), run=1781-1781msec 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:40.417 rmmod nvme_rdma 00:24:40.417 rmmod nvme_fabrics 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 865621 ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 865621 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 865621 ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 865621 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865621 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865621' 00:24:40.417 killing process with pid 865621 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 865621 00:24:40.417 19:15:32 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 865621 00:24:40.676 19:15:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.676 19:15:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:40.676 00:24:40.676 real 0m14.129s 00:24:40.676 user 0m49.633s 00:24:40.676 sys 0m5.359s 00:24:40.676 19:15:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.676 19:15:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.676 ************************************ 00:24:40.676 END TEST nvmf_fio_host 00:24:40.677 ************************************ 00:24:40.936 19:15:33 nvmf_rdma.nvmf_host 
-- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:40.936 19:15:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.936 19:15:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.936 19:15:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.936 ************************************ 00:24:40.936 START TEST nvmf_failover 00:24:40.936 ************************************ 00:24:40.936 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:40.936 * Looking for test storage... 00:24:40.936 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.937 19:15:33 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.937 19:15:33 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:24:47.506 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:24:47.506 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:47.506 19:15:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:24:47.506 Found net devices under 0000:af:00.0: mlx_0_0 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:24:47.506 Found net devices under 0000:af:00.1: mlx_0_1 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:47.506 19:15:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:47.506 19:15:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:47.506 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:47.506 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:24:47.506 altname enp175s0f0np0 00:24:47.506 altname ens801f0np0 00:24:47.506 inet 192.168.100.8/24 scope global mlx_0_0 00:24:47.506 valid_lft forever preferred_lft forever 00:24:47.506 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:47.507 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:47.507 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:24:47.507 altname enp175s0f1np1 00:24:47.507 altname ens801f1np1 00:24:47.507 inet 192.168.100.9/24 scope global mlx_0_1 00:24:47.507 valid_lft forever preferred_lft forever 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@104 -- # echo mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:47.507 192.168.100.9' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:47.507 192.168.100.9' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:47.507 192.168.100.9' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=870360 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 870360 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- 
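A few entries up, get_ip_address plus the head/tail juggling is what turns the interface dumps into the NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP used for the rest of the run. Condensed from the xtrace above (same pipeline, minus the tracing):

  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9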
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 870360 ']' 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.507 19:15:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.507 [2024-07-25 19:15:39.205831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:47.507 [2024-07-25 19:15:39.205881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.507 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.507 [2024-07-25 19:15:39.274353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.507 [2024-07-25 19:15:39.353481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.507 [2024-07-25 19:15:39.353518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.507 [2024-07-25 19:15:39.353525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.507 [2024-07-25 19:15:39.353532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.507 [2024-07-25 19:15:39.353537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
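nvmfappstart above launches the target with -m 0xE (cores 1-3, matching the three reactor notices just below), and waitforlisten blocks until the app's RPC socket answers. A simplified stand-in for that wait loop -- the real waitforlisten in autotest_common.sh also bounds the number of retries:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready to serve RPCs.
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done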
00:24:47.507 [2024-07-25 19:15:39.353591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.507 [2024-07-25 19:15:39.353696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.507 [2024-07-25 19:15:39.353698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.766 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:48.025 [2024-07-25 19:15:40.286589] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2356580/0x235aa70) succeed. 00:24:48.025 [2024-07-25 19:15:40.295880] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2357b20/0x239c110) succeed. 00:24:48.025 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:48.283 Malloc0 00:24:48.283 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.542 19:15:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.801 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:48.801 [2024-07-25 19:15:41.191267] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:48.801 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:49.060 [2024-07-25 19:15:41.387707] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:49.060 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:49.319 [2024-07-25 19:15:41.592446] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=870734 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 870734 /var/tmp/bdevperf.sock 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 870734 ']' 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.319 19:15:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.256 19:15:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.256 19:15:42 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:50.256 19:15:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.515 NVMe0n1 00:24:50.515 19:15:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.774 00:24:50.774 19:15:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=870979 00:24:50.774 19:15:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.774 19:15:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:51.710 19:15:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:51.969 19:15:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:55.257 19:15:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.257 00:24:55.257 19:15:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:55.514 19:15:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:58.800 19:15:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:58.800 [2024-07-25 19:15:50.914416] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:58.800 19:15:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:59.799 19:15:51 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:59.799 19:15:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 870979 00:25:06.523 0 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 870734 ']' 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870734' 00:25:06.523 killing process with pid 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 870734 00:25:06.523 19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.523 [2024-07-25 19:15:41.648749] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:06.523 [2024-07-25 19:15:41.648800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870734 ] 00:25:06.523 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.523 [2024-07-25 19:15:41.718207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.523 [2024-07-25 19:15:41.795593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.523 Running I/O for 15 seconds... 
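Everything from here to the end of try.txt is fallout from the listener shuffle that host/failover.sh performs while bdevperf runs its 128-deep 4 KiB verify workload across the attached paths. The sequence, condensed from the RPC calls above (NQN, address, and ports exactly as in this run):

  rpc=scripts/rpc.py; nqn=nqn.2016-06.io.spdk:cnode1; ip=192.168.100.8
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a "$ip" -s 4420   # kill path 1
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t rdma -a "$ip" -s 4422 -f ipv4 -n "$nqn"               # bring up path 3
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a "$ip" -s 4421   # kill path 2
  sleep 3
  $rpc nvmf_subsystem_add_listener    "$nqn" -t rdma -a "$ip" -s 4420   # path 1 back
  sleep 1
  $rpc nvmf_subsystem_remove_listener "$nqn" -t rdma -a "$ip" -s 4422   # kill path 3
  # Every I/O still queued on a deleted RDMA SQ completes as
  # ABORTED - SQ DELETION, which is the wall of notices that follows.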
00:25:06.523 [2024-07-25 19:15:45.230779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x187c00 00:25:06.523 [2024-07-25 19:15:45.230813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.523 [2024-07-25 19:15:45.230830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187c00 00:25:06.523 [2024-07-25 19:15:45.230838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for the remainder of the aborted queue, 2024-07-25 19:15:45.230848 through 19:15:45.232465: lba climbs from 20552 to 21408 in steps of 8, the SGL buffer address from 0x200007512000 to 0x2000075e8000 in steps of 0x2000, with len:8, key:0x187c00 and the completion fields qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 unchanged throughout; only the command cid varies ...] 00:25:06.527
[2024-07-25 19:15:45.232473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.232481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.232489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.232495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.232505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.232512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.232520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.232527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.232535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.232541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187c00 00:25:06.527 [2024-07-25 19:15:45.240761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.240844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:45.240851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.242312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.527 [2024-07-25 19:15:45.242323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.527 [2024-07-25 19:15:45.242330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21552 len:8 PRP1 0x0 PRP2 0x0 00:25:06.527 [2024-07-25 19:15:45.242338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.242378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x2000192e4980 was disconnected and freed. reset controller. 00:25:06.527 [2024-07-25 19:15:45.242387] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:25:06.527 [2024-07-25 19:15:45.242395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.527 [2024-07-25 19:15:45.242429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.527 [2024-07-25 19:15:45.242438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.242446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.527 [2024-07-25 19:15:45.242452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.242459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.527 [2024-07-25 19:15:45.242465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.242472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.527 [2024-07-25 19:15:45.242478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:45.259707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:06.527 [2024-07-25 19:15:45.259721] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:06.527 [2024-07-25 19:15:45.259729] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:06.527 [2024-07-25 19:15:45.262617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.527 [2024-07-25 19:15:45.307420] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
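The wall of nvme_io_qpair_print_command / spdk_nvme_print_completion pairs above is one event repeated: every queued I/O on qpair 1 is aborted with ABORTED - SQ DELETION while the qpair is torn down. The actual state transitions are the handful of records at the end: the qpair is disconnected and freed, bdev_nvme starts a failover from 192.168.100.8:4420 to 192.168.100.8:4421, cnode1 briefly sits in the failed state (the admin-queue ASYNC EVENT REQUESTs are aborted and the CQ reports transport error -6), and the controller reset completes. A minimal Python sketch for recovering just that storyline from a log like this one — the regex matches the "[timestamp] file.c: line:function: *LEVEL*: message" wording printed above, and the helper names (lifecycle_events, the LIFECYCLE set) are illustrative, not SPDK API:

import re

# One record, as printed above:
#   [2024-07-25 19:15:45.242387] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from ...
RECORD = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+)\]\s+"
    r"(?P<src>[\w.]+\.c):\s*\d+:(?P<func>\w+):\s+"
    r"\*(?P<level>\w+)\*:\s*(?P<msg>.*?)\s*$"
)

# State-transition records worth keeping; everything else in this section is a
# per-command abort print. Illustrative selection, not an exhaustive SPDK catalog.
LIFECYCLE = {
    "nvme_qpair_abort_queued_reqs",
    "bdev_nvme_disconnected_qpair_cb",
    "bdev_nvme_failover_trid",
    "bdev_nvme_failover_ctrlr_unsafe",
    "nvme_ctrlr_fail",
    "nvme_ctrlr_disconnect",
    "_bdev_nvme_reset_ctrlr_complete",
}

def lifecycle_events(text):
    """Yield (timestamp, function, message) for state-transition records only."""
    flat = " ".join(text.split())                 # undo arbitrary line wrapping
    for rec in re.split(r"(?=\[\d{4}-\d{2}-\d{2} )", flat):
        m = RECORD.match(rec)
        if m and m.group("func") in LIFECYCLE:
            # Drop the wall-clock prefix of the following record, if present.
            msg = re.sub(r"\s*\d{2}:\d{2}:\d{2}\.\d+\s*$", "", m.group("msg"))
            yield m.group("ts"), m.group("func"), msg

Fed this section, it reduces the several hundred abort prints to the short disconnect → failover → fail → reset → "Resetting controller successful" sequence.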
00:25:06.527 [2024-07-25 19:15:48.728759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.527 [2024-07-25 19:15:48.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.527 [2024-07-25 19:15:48.728806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.728927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.728941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.728956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.728976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.728990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.728998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.729005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.729020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.729034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.528 [2024-07-25 19:15:48.729049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 
00:25:06.528 [2024-07-25 19:15:48.729088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.528 [2024-07-25 19:15:48.729213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187c00 00:25:06.528 [2024-07-25 19:15:48.729219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 
19:15:48.729508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.529 [2024-07-25 19:15:48.729645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187c00 00:25:06.529 [2024-07-25 19:15:48.729721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.529 [2024-07-25 19:15:48.729729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 
m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.729883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.729989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.729997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.530 [2024-07-25 19:15:48.730241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.530 [2024-07-25 19:15:48.730264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187c00 00:25:06.530 [2024-07-25 19:15:48.730270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.531 [2024-07-25 19:15:48.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187c00 00:25:06.531 [2024-07-25 19:15:48.730284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.531 [2024-07-25 19:15:48.730293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187c00 00:25:06.531 [2024-07-25 19:15:48.730299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.531 [2024-07-25 19:15:48.730309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187c00 00:25:06.531 [2024-07-25 19:15:48.730315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.531 [2024-07-25 19:15:48.730323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187c00 00:25:06.531 [2024-07-25 19:15:48.730329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0 00:25:06.531 [2024-07-25 19:15:48.730337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187c00 00:25:06.531 [2024-07-25 19:15:48.730344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0
00:25:06.531 [2024-07-25 19:15:48.730352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187c00
00:25:06.531 [2024-07-25 19:15:48.730359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every remaining in-flight command on qid:1, READ lba:104432-104536 (SGL KEYED DATA BLOCK) and WRITE lba:104880-104936 (SGL DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:06.531 [2024-07-25 19:15:48.731864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.531 [2024-07-25 19:15:48.731875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.531 [2024-07-25 19:15:48.731882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104544 len:8 PRP1 0x0 PRP2 0x0
00:25:06.531 [2024-07-25 19:15:48.731889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.531 [2024-07-25 19:15:48.731934] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:25:06.531 [2024-07-25 19:15:48.731944] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:25:06.531 [2024-07-25 19:15:48.731951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:06.531 [2024-07-25 19:15:48.734836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:06.531 [2024-07-25 19:15:48.749069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:06.531 [2024-07-25 19:15:48.796984] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
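The block above is one complete failover: every command still queued on the deleted submission queue is printed and completed with ABORTED - SQ DELETION, the disconnected qpair is freed, bdev_nvme moves to the next transport ID, and the reconnect ends in "Resetting controller successful". Each failover produces exactly one such success notice, which is what the test's pass check later in this log counts. A minimal sketch of that idea (the capture file name is taken from the cat further down; the snippet is illustrative, not the script's literal text):

    # Sketch: count successful controller resets in the captured output.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }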
00:25:06.531 [2024-07-25 19:15:53.112723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187c00
00:25:06.531 [2024-07-25 19:15:53.112757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a5efe000 sqhd:52b0 p:0 m:0 dnr:0
[... as in the first reset, the pair repeats for every remaining in-flight command on qid:1, READ lba:59312-59552 (SGL KEYED DATA BLOCK) and WRITE lba:59560-60312 (SGL DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:06.535 [2024-07-25 19:15:53.115959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.535 [2024-07-25 19:15:53.115971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.535 [2024-07-25 19:15:53.115978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60320 len:8 PRP1 0x0 PRP2 0x0
00:25:06.535 [2024-07-25 19:15:53.115988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.535 [2024-07-25 19:15:53.116027] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:25:06.535 [2024-07-25 19:15:53.116036] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:25:06.535 [2024-07-25 19:15:53.116045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:06.535 [2024-07-25 19:15:53.118916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:06.535 [2024-07-25 19:15:53.133043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:25:06.535 [2024-07-25 19:15:53.181616] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:06.535
00:25:06.535 Latency(us)
00:25:06.535 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average  min     max
00:25:06.535 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:06.535 Verification LBA range: start 0x0 length 0x4000
00:25:06.535 NVMe0n1            : 15.01      13824.71  54.00  316.48  0.00  9028.29  366.86  1043105.17
00:25:06.535 ===================================================================================================================
00:25:06.535 Total              : 13824.71  54.00  316.48  0.00  9028.29  366.86  1043105.17
00:25:06.535 Received shutdown signal, test time was about 15.000000 seconds
00:25:06.535
00:25:06.535 Latency(us)
00:25:06.535 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:25:06.535 ===================================================================================================================
00:25:06.535 Total              : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=873438
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 873438 /var/tmp/bdevperf.sock
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 873438 ']'
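Here the second bdevperf instance is started with -z (come up idle and wait for RPC) and -r pointing at a private RPC socket; waitforlisten, whose expansion continues in the trace below, blocks until that socket is usable before any rpc.py call is made against it. Conceptually, stripped of the autotest_common.sh plumbing, it amounts to something like this (a sketch of the idea only, not the helper's actual implementation):

    # Sketch: poll until the app's RPC socket appears, bail if the app died.
    while ! [ -S /var/tmp/bdevperf.sock ]; do
        kill -0 "$bdevperf_pid" || exit 1   # app exited before listening
        sleep 0.1
    done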
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
19:15:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
19:15:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:15:59 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
19:15:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
[2024-07-25 19:15:59.516261] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
19:15:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
[2024-07-25 19:15:59.704881] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
19:15:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
NVMe0n1
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
19:16:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
19:16:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:16:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
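Everything in the trace above goes through SPDK's JSON-RPC client: the target gains listeners on the two extra ports, bdevperf (on its own RPC socket) attaches the same subsystem over all three paths, and detaching the active path is what forces the first failover. Condensed into plain commands, using only the calls visible in the trace (the loop is an editorial sketch, not the script's literal shape):

    # Sketch of the host/failover.sh@76-84 RPC sequence.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t rdma -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Dropping the active path forces bdev_nvme to fail over to the next one.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1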
19:16:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=874386 00:25:11.811 19:16:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:11.811 19:16:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 874386 00:25:12.746 0 00:25:13.005 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:13.005 [2024-07-25 19:15:58.509780] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:13.005 [2024-07-25 19:15:58.509830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873438 ] 00:25:13.005 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.005 [2024-07-25 19:15:58.580967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.005 [2024-07-25 19:15:58.649103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.005 [2024-07-25 19:16:00.877628] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:25:13.005 [2024-07-25 19:16:00.878169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.005 [2024-07-25 19:16:00.878200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.005 [2024-07-25 19:16:00.899988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:13.005 [2024-07-25 19:16:00.916338] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:13.005 Running I/O for 1 seconds... 
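[Editor's sketch] The perform_tests call above is SPDK's standard way of driving an idle bdevperf instance over its RPC socket; a short sketch with the flags taken from this run:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # -z starts bdevperf idle, waiting for RPC configuration instead of
    # running immediately; -f keeps it alive until work arrives.
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &

    # ... attach the NVMe0 paths through rpc.py -s /var/tmp/bdevperf.sock ...

    # perform_tests kicks off the configured verify workload; the test above
    # backgrounds this call and waits on its pid (run_test_pid).
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
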
00:25:13.005 00:25:13.005 Latency(us) 00:25:13.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.005 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:13.005 Verification LBA range: start 0x0 length 0x4000 00:25:13.006 NVMe0n1 : 1.01 17548.84 68.55 0.00 0.00 7253.19 2706.92 10827.69 00:25:13.006 =================================================================================================================== 00:25:13.006 Total : 17548.84 68.55 0.00 0.00 7253.19 2706.92 10827.69 00:25:13.006 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.006 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:13.006 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.264 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.264 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:13.523 19:16:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.781 19:16:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 873438 ']' 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873438' 00:25:17.070 killing process with pid 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 873438 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:17.070 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:17.329 rmmod nvme_rdma 00:25:17.329 rmmod nvme_fabrics 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 870360 ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 870360 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 870360 ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 870360 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870360 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870360' 00:25:17.329 killing process with pid 870360 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 870360 00:25:17.329 19:16:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 870360 00:25:17.588 19:16:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:17.847 00:25:17.847 real 0m36.869s 00:25:17.847 user 2m6.155s 00:25:17.847 sys 0m6.275s 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:17.847 ************************************ 00:25:17.847 END TEST nvmf_failover 00:25:17.847 
************************************ 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.847 ************************************ 00:25:17.847 START TEST nvmf_host_discovery 00:25:17.847 ************************************ 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:25:17.847 * Looking for test storage... 00:25:17.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.847 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.847 
19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:17.848 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:25:17.848 00:25:17.848 real 0m0.119s 00:25:17.848 user 0m0.057s 00:25:17.848 sys 0m0.070s 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.848 ************************************ 00:25:17.848 END TEST nvmf_host_discovery 00:25:17.848 ************************************ 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.848 ************************************ 00:25:17.848 START TEST nvmf_host_multipath_status 00:25:17.848 ************************************ 00:25:17.848 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:25:18.107 * Looking for test storage... 
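[Editor's sketch] The guard that produced the nvmf_host_discovery skip recorded above (host/discovery.sh lines 11-13 in the trace) reduces to a two-line transport check. Reconstructed here as a sketch; the variable name TEST_TRANSPORT is an assumption, since the xtrace output only shows its expanded value 'rdma'.

    # Reconstruction of host/discovery.sh@11-13 as traced above; TEST_TRANSPORT
    # is assumed -- the trace only shows the already-expanded 'rdma'.
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi
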
00:25:18.107 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.107 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.108 19:16:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.679 19:16:16 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:25:24.679 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:25:24.679 19:16:16 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:25:24.679 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:25:24.679 Found net devices under 0000:af:00.0: mlx_0_0 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:25:24.679 Found net devices under 0000:af:00.1: mlx_0_1 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:24.679 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:24.680 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:24.680 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:25:24.680 altname enp175s0f0np0 00:25:24.680 altname ens801f0np0 00:25:24.680 inet 192.168.100.8/24 scope global mlx_0_0 00:25:24.680 valid_lft forever preferred_lft forever 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:24.680 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:24.680 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:25:24.680 altname enp175s0f1np1 00:25:24.680 altname ens801f1np1 00:25:24.680 inet 192.168.100.9/24 scope global mlx_0_1 00:25:24.680 valid_lft forever preferred_lft forever 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:24.680 192.168.100.9' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:24.680 192.168.100.9' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:24.680 192.168.100.9' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=878689 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 878689 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 878689 ']' 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.680 19:16:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.680 [2024-07-25 19:16:16.300852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:24.680 [2024-07-25 19:16:16.300897] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.680 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.680 [2024-07-25 19:16:16.369962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:24.680 [2024-07-25 19:16:16.447226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.680 [2024-07-25 19:16:16.447259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.680 [2024-07-25 19:16:16.447269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.680 [2024-07-25 19:16:16.447275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.680 [2024-07-25 19:16:16.447280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.680 [2024-07-25 19:16:16.447337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.680 [2024-07-25 19:16:16.447338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.680 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.680 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:24.680 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.680 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.680 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.939 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.939 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=878689 00:25:24.939 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:24.939 [2024-07-25 19:16:17.366690] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x100d720/0x1011c10) succeed. 00:25:24.939 [2024-07-25 19:16:17.375642] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x100ec20/0x10532b0) succeed. 
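[Editor's sketch] Up to this point the multipath_status prologue has started the target and created the transport; condensed from the trace, the target-side bring-up looks like this (binary path, core mask, and transport options are all visible in the log above; the backgrounding and pid capture paraphrase nvmfappstart/waitforlisten):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Start the target on two cores (-m 0x3) with all tracepoint groups enabled.
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs (elided here).

    # One RDMA transport shared by every subsystem created afterwards.
    $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
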
00:25:25.198 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:25.456 Malloc0 00:25:25.456 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:25.456 19:16:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.715 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:25.973 [2024-07-25 19:16:18.251488] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:25.973 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:25:26.232 [2024-07-25 19:16:18.447922] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=878957 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 878957 /var/tmp/bdevperf.sock 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 878957 ']' 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
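[Editor's sketch] The subsystem built just above is what makes the ANA portion of this test possible: "-r" enables ANA reporting, "-m 2" caps the namespace count, and the two listeners give the host two reportable paths. A condensed sketch of those RPCs, all taken from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4421
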
00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.232 19:16:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.169 19:16:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.169 19:16:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:27.169 19:16:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:27.169 19:16:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:27.427 Nvme0n1 00:25:27.428 19:16:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:27.686 Nvme0n1 00:25:27.686 19:16:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:27.686 19:16:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:30.218 19:16:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:30.218 19:16:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:30.218 19:16:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:30.218 19:16:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:31.153 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:31.153 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.153 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.153 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.410 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.410 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:31.410 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.410 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:31.668 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.668 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:31.668 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.668 19:16:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:31.668 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.668 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:31.668 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.668 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:31.925 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.925 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:31.925 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.926 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.184 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.184 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.184 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.184 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.442 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.442 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:32.442 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 
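After the two bdev_nvme_attach_controller calls above (same NQN, ports 4420 and 4421, the second with -x multipath, so both listeners become paths of a single Nvme0n1 bdev), every status check in this run follows one pattern: query bdevperf for its I/O paths and compare one field of one listener's path against an expected value. A plausible reconstruction of the port_status helper exercised at multipath_status.sh@64, with the function name and argument order inferred from the xtrace rather than copied from the script:

    port_status() {
        local port=$1 attr=$2 expected=$3
        # Ask the bdevperf app (note -s /var/tmp/bdevperf.sock), not the target
        local actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }
    # e.g. port_status 4420 current true  -> the path via listener 4420 carries I/O

The three attributes polled throughout are current (the path I/O is actually routed on), connected (transport-level connection is up), and accessible (the ANA state permits I/O).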
00:25:32.442 19:16:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:32.699 19:16:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.075 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.333 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.333 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.333 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.333 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:34.591 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.591 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:34.591 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:34.591 19:16:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:34.848 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:35.106 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:35.364 19:16:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:36.298 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:36.298 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:36.298 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.298 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.556 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.556 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:36.556 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.556 19:16:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:36.814 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.814 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.814 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
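set_ANA_state, per the @59/@60 calls traced above, simply flips the ANA state of each listener on the target side; the host is never told directly and picks the change up through ANA notifications, which is why each transition is followed by a one-second sleep before check_status. A sketch under the same assumptions:

    set_ANA_state() {
        # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t rdma -a 192.168.100.8 -s 4421 -n "$2"
    }
    # States exercised in this run: optimized, non_optimized, inaccessible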
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.814 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.073 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.331 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.331 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:37.331 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.331 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.589 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.589 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:37.589 19:16:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:37.847 19:16:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:37.847 19:16:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.223 
19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.223 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.481 19:16:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.739 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.739 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.739 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.739 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.997 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.997 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:39.997 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.997 
19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.255 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.255 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:40.255 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:40.255 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:40.513 19:16:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:41.447 19:16:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:41.447 19:16:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.447 19:16:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.447 19:16:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.705 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.705 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:41.705 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.705 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.963 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.963 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.963 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.963 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.222 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.481 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.481 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:42.481 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.481 19:16:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.739 19:16:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.740 19:16:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:42.740 19:16:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:42.998 19:16:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:42.998 19:16:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.376 
19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.376 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.634 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.634 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.634 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.634 19:16:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.634 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.634 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.634 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.635 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:44.893 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.893 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:44.893 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.893 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.151 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.151 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.151 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.151 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.410 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.410 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:45.410 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:45.410 19:16:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:45.669 19:16:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:45.928 19:16:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:46.865 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:46.865 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.865 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.865 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.124 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.124 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.124 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.124 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.383 19:16:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.641 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.641 19:16:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.641 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.641 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.900 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.900 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.900 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.900 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.159 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.159 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:48.159 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:48.418 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:48.418 19:16:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:49.794 19:16:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:49.794 19:16:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:49.794 19:16:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.794 19:16:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.794 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.053 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.053 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.053 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.053 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.312 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.312 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.312 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.312 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.571 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.571 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.571 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.571 19:16:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.571 19:16:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.571 19:16:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:50.571 19:16:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:50.830 19:16:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:51.089 19:16:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
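check_status takes six booleans; from the @68-@73 port_status calls above, the order is current(4420), current(4421), connected(4420), connected(4421), accessible(4420), accessible(4421). A condensed sketch, omitting whatever retry or trap plumbing the real script adds:

    check_status() {
        port_status 4420 current    "$1" &&
        port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" &&
        port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }
    # e.g. after set_ANA_state inaccessible optimized (see @112-@114 above):
    #   check_status false true true true false true

Note the effect of the @116 bdev_nvme_set_multipath_policy -p active_active call: before it, at most one path reports current:true at a time, while afterwards the @121 check expects true for both 4420 and 4421 when both listeners are optimized, i.e. I/O is spread across both paths.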
00:25:52.020 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:52.020 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.020 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.020 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.279 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.279 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:52.279 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.279 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.538 19:16:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.797 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.797 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.797 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.797 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.055 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.055 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.055 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.055 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.313 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.313 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:53.313 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:53.572 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:53.572 19:16:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:54.950 19:16:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:54.950 19:16:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.950 19:16:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.950 19:16:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.950 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.208 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:55.209 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.209 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.209 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.467 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.467 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.468 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.468 19:16:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 878957 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 878957 ']' 00:25:55.727 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 878957 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 878957 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 878957' 00:25:55.986 killing process with pid 878957 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 878957 00:25:55.986 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 878957 00:25:55.986 Connection closed with partial response: 00:25:55.986 00:25:55.986 
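The teardown goes through autotest_common.sh's killprocess, which, per the @950-@974 xtrace above, validates the pid, refuses to kill a sudo wrapper, then signals and reaps the process; the "Connection closed with partial response" lines are bdevperf reporting its RPC connection dropping as it dies. The comm name compared against sudo is reactor_2 because SPDK renames its reactor threads. A condensed sketch, inferred from the trace rather than quoted from the script:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1             # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1    # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                    # reap; bdevperf exits non-zero on SIGTERM
    }

The @141 cat of test/nvmf/host/try.txt that follows replays bdevperf's captured output: the DPDK/EAL startup banner, then the per-command READ/WRITE completion trace, where the ASYMMETRIC ACCESS INACCESSIBLE (03/02) statuses are the expected errors from I/O caught on a path whose ANA state had just been flipped.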
00:25:56.248 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 878957 00:25:56.248 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:56.248 [2024-07-25 19:16:18.517245] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:56.248 [2024-07-25 19:16:18.517294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid878957 ] 00:25:56.248 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.248 [2024-07-25 19:16:18.583984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.248 [2024-07-25 19:16:18.656000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.248 Running I/O for 90 seconds... 00:25:56.248 [2024-07-25 19:16:32.658800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78160 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075f4000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.658984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.658993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.659000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.659009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.659016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.659025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.659038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.659055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.248 [2024-07-25 19:16:32.659064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x188d00 00:25:56.248 [2024-07-25 19:16:32.659071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x188d00 
00:25:56.249 [2024-07-25 19:16:32.659137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 
dnr:0 00:25:56.249 [2024-07-25 19:16:32.659443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.249 [2024-07-25 19:16:32.659557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:56.249 [2024-07-25 19:16:32.659566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x188d00 00:25:56.249 [2024-07-25 19:16:32.659572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x188d00 00:25:56.250 [2024-07-25 19:16:32.659588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:56.250 [2024-07-25 19:16:32.659745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x188d00 00:25:56.250 [2024-07-25 19:16:32.659793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x188d00 00:25:56.250 [2024-07-25 19:16:32.659809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.659987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.659996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:25:56.250 [2024-07-25 19:16:32.660058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.250 [2024-07-25 19:16:32.660141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.250 [2024-07-25 19:16:32.660151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.251 [2024-07-25 19:16:32.660299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78424 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.660395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.660401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78496 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000756a000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x188d00 00:25:56.251 [2024-07-25 19:16:32.661471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x188d00 
00:25:56.251 [2024-07-25 19:16:32.661493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.251 [2024-07-25 19:16:32.661509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 
19:16:32.661693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:32.661865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:32.661872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 
19:16:45.965633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.965703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.965728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x188d00 00:25:56.252 [2024-07-25 19:16:45.965735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.966074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.252 [2024-07-25 19:16:45.966085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:56.252 [2024-07-25 19:16:45.966096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 
key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 
m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:107 nsid:1 lba:120528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.253 [2024-07-25 19:16:45.966540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x188d00 00:25:56.253 [2024-07-25 19:16:45.966556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:56.253 [2024-07-25 19:16:45.966566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.254 [2024-07-25 19:16:45.966622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.254 [2024-07-25 19:16:45.966638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.254 [2024-07-25 19:16:45.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:56.254 [2024-07-25 19:16:45.966710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x188d00 00:25:56.254 [2024-07-25 19:16:45.966718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:25:56.254 [2024-07-25 19:16:45.966727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x188d00
00:25:56.254 [2024-07-25 19:16:45.966734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x188d00
00:25:56.254 [2024-07-25 19:16:45.966750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.254 [2024-07-25 19:16:45.966766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x188d00
00:25:56.254 [2024-07-25 19:16:45.966782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.254 [2024-07-25 19:16:45.966798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x188d00
00:25:56.254 [2024-07-25 19:16:45.966814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:56.254 [2024-07-25 19:16:45.966824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x188d00
00:25:56.254 [2024-07-25 19:16:45.966831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:25:56.254 Received shutdown signal, test time was about 28.000226 seconds
00:25:56.254
00:25:56.254                                                               Latency(us)
00:25:56.254 Device Information                    : runtime(s)      IOPS     MiB/s   Fail/s    TO/s   Average       min        max
00:25:56.254 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:56.254 Verification LBA range: start 0x0 length 0x4000
00:25:56.254 Nvme0n1                               :      28.00  15458.27     60.38     0.00    0.00   8260.56     52.98 3019898.88
00:25:56.254 ===================================================================================================================
00:25:56.254 Total                                 :           15458.27     60.38     0.00    0.00   8260.56     52.98 3019898.88
00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
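The entries that follow tear the fixture down: the EXIT trap set by the test is cleared, the scratch file try.txt is removed, nvmftestfini unloads nvme-rdma and nvme-fabrics (the rmmod lines), and killprocess reaps the long-running nvmf target, pid 878689. A condensed sketch of killprocess, reconstructed from the xtrace below; the real helper lives in common/autotest_common.sh and carries more error handling:

    # killprocess PID -- kill a test daemon and wait for it to exit
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # no pid recorded, nothing to kill
        kill -0 "$pid" 2> /dev/null || return 0  # signal 0 only probes liveness
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
            # a sudo wrapper must not be killed directly; the real helper goes
            # after its child instead (this run takes the plain path below)
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so sockets, hugepages and pidfiles are freed
    }

00:25:56.254 19:16:48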
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:56.254 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:56.254 rmmod nvme_rdma 00:25:56.254 rmmod nvme_fabrics 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 878689 ']' 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 878689 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 878689 ']' 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 878689 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 878689 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.513 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.514 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 878689' 00:25:56.514 killing process with pid 878689 00:25:56.514 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 878689 00:25:56.514 19:16:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 878689 00:25:56.773 19:16:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:56.773 19:16:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:56.773 00:25:56.773 real 0m38.711s 00:25:56.773 user 1m53.540s 00:25:56.773 sys 0m7.769s 00:25:56.773 19:16:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.773 19:16:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.773 ************************************ 00:25:56.773 END TEST nvmf_host_multipath_status 00:25:56.773 ************************************ 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.774 ************************************ 00:25:56.774 START TEST nvmf_discovery_remove_ifc 00:25:56.774 ************************************ 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:56.774 * Looking for test storage... 00:25:56.774 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@47 -- # : 0 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:56.774 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:25:56.774 00:25:56.774 real 0m0.124s 00:25:56.774 user 0m0.060s 00:25:56.774 sys 0m0.071s 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.774 19:16:49 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.774 ************************************ 00:25:56.774 END TEST nvmf_discovery_remove_ifc 00:25:56.774 ************************************ 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.034 ************************************ 00:25:57.034 START TEST nvmf_identify_kernel_target 00:25:57.034 ************************************ 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:57.034 * Looking for test storage... 
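The nvmf_discovery_remove_ifc suite above is a deliberate no-op on this transport: it prints the skip message and exits 0, so run_test still records a passing END TEST banner plus timing. The guard, reconstructed from the trace at host/discovery_remove_ifc.sh lines 14-16 (the variable name is an assumption; the xtrace only shows the already-expanded comparison):

    # skip the whole suite when running over RDMA
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
        exit 0   # exit success: the skip is expected, not a failure
    fi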
00:25:57.034 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.034 19:16:49 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.605 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
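gather_supported_nvmf_pci_devs, traced above and continuing below, builds device-ID lists for Intel E810 (0x1592, 0x159b), X722 (0x37d2) and the Mellanox ConnectX family, then walks the PCI bus for matches; this rig matches 0x15b3:0x1017 (ConnectX-5) at 0000:af:00.0 and 0000:af:00.1, and each hit is resolved to its net device through sysfs. A rough standalone equivalent, assuming lspci is available (the helper itself reads a pre-built pci_bus_cache rather than shelling out):

    # scan for the same ConnectX device IDs the trace appends to mlx[]
    for dev in a2dc 1021 a2d6 101d 1017 1019 1015 1013; do
        lspci -D -d "15b3:${dev}"
    done
    # resolve a PCI address to its net device the way the trace does
    ls /sys/bus/pci/devices/0000:af:00.0/net/   # -> mlx_0_0 on this host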
00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:26:03.606 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:26:03.606 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:26:03.606 19:16:54 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:26:03.606 Found net devices under 0000:af:00.0: mlx_0_0 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:26:03.606 Found net devices under 0000:af:00.1: mlx_0_1 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 
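rdma_device_init now loads the kernel IB/RDMA stack, and allocate_nic_ips then gives each RDMA-capable port an address from NVMF_IP_PREFIX=192.168.100 starting at NVMF_IP_LEAST_ADDR=8. The modprobe list below is verbatim from the trace; the ip addr add lines are an assumption, since this run finds 192.168.100.8/9 already present and only reads them back with the awk/cut pipeline shown:

    # kernel RDMA/IB modules, in the order the trace loads them
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # one address per port (assumed step), then the read-back used by get_ip_address
    ip addr add 192.168.100.8/24 dev mlx_0_0
    ip addr add 192.168.100.9/24 dev mlx_0_1
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8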
00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:03.606 19:16:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:03.606 
19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:03.606 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:03.607 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:03.607 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:26:03.607 altname enp175s0f0np0 00:26:03.607 altname ens801f0np0 00:26:03.607 inet 192.168.100.8/24 scope global mlx_0_0 00:26:03.607 valid_lft forever preferred_lft forever 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:03.607 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:03.607 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:26:03.607 altname enp175s0f1np1 00:26:03.607 altname ens801f1np1 00:26:03.607 inet 192.168.100.9/24 scope global mlx_0_1 00:26:03.607 valid_lft forever preferred_lft forever 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # 
mapfile -t rxe_net_devs 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:03.607 192.168.100.9' 00:26:03.607 19:16:55 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:03.607 192.168.100.9' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:03.607 192.168.100.9' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 
-- # nvmet=/sys/kernel/config/nvmet 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:03.607 19:16:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:26:05.515 Waiting for block devices as requested 00:26:05.515 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:05.773 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:05.773 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:06.032 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:06.032 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:06.032 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:06.291 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:06.291 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:06.550 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:06.550 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:06.550 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:06.809 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:06.809 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:06.810 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:07.069 No valid GPT data, bailing 00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt=
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:07.069 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420
00:26:07.328
00:26:07.328 Discovery Log Number of Records 2, Generation counter 2
00:26:07.328 =====Discovery Log Entry 0======
00:26:07.328 trtype:  rdma
00:26:07.328 adrfam:  ipv4
00:26:07.328 subtype: current discovery subsystem
00:26:07.328 treq:    not specified, sq flow control disable supported
00:26:07.328 portid:  1
00:26:07.328 trsvcid: 4420
00:26:07.328 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:26:07.328 traddr:  192.168.100.8
00:26:07.328 eflags:  none
00:26:07.328 rdma_prtype: not specified
00:26:07.328 rdma_qptype: connected
00:26:07.328 rdma_cms:    rdma-cm
00:26:07.328 rdma_pkey: 0x0000
00:26:07.328 =====Discovery Log Entry 1======
00:26:07.328 trtype:  rdma
00:26:07.328 adrfam:  ipv4
00:26:07.328 subtype: nvme subsystem
00:26:07.328 treq:    not specified, sq flow control disable supported
00:26:07.328 portid:  1
00:26:07.328 trsvcid: 4420
00:26:07.328 subnqn:  nqn.2016-06.io.spdk:testnqn
00:26:07.328 traddr:  192.168.100.8
00:26:07.328 eflags:  none
00:26:07.328 rdma_prtype: not specified
00:26:07.328 rdma_qptype: connected
00:26:07.328 rdma_cms:    rdma-cm
00:26:07.328 rdma_pkey: 0x0000
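The discovery log above confirms the kernel target came up as intended: a discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both reachable at 192.168.100.8:4420 over RDMA. The mkdir/echo/ln -s entries are configure_kernel_target driving the kernel nvmet configfs tree; the xtrace shows only the values being written, so the attribute paths below are filled in from the standard nvmet configfs layout (a sketch of the sequence, not a copy of the helper). That the echo at @665 lands in attr_model is corroborated further down, where identify reports Model Number: SPDK-nqn.2016-06.io.spdk:testnqn.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    mkdir "$subsys"                                               # @658
    mkdir "$subsys/namespaces/1"                                  # @659
    mkdir "$port"                                                 # @660
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # @665
    echo 1 > "$subsys/attr_allow_any_host"                        # @667
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # @668
    echo 1 > "$subsys/namespaces/1/enable"                        # @669
    echo 192.168.100.8 > "$port/addr_traddr"                      # @671
    echo rdma > "$port/addr_trtype"                               # @672
    echo 4420 > "$port/addr_trsvcid"                              # @673
    echo ipv4 > "$port/addr_adrfam"                               # @674
    ln -s "$subsys" "$port/subsystems/"                           # @677

00:26:07.328 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target --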
host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:26:07.329 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:07.329 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.329 ===================================================== 00:26:07.329 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:07.329 ===================================================== 00:26:07.329 Controller Capabilities/Features 00:26:07.329 ================================ 00:26:07.329 Vendor ID: 0000 00:26:07.329 Subsystem Vendor ID: 0000 00:26:07.329 Serial Number: 2c7be9652f36783ea2c5 00:26:07.329 Model Number: Linux 00:26:07.329 Firmware Version: 6.8.9-20 00:26:07.329 Recommended Arb Burst: 0 00:26:07.329 IEEE OUI Identifier: 00 00 00 00:26:07.329 Multi-path I/O 00:26:07.329 May have multiple subsystem ports: No 00:26:07.329 May have multiple controllers: No 00:26:07.329 Associated with SR-IOV VF: No 00:26:07.329 Max Data Transfer Size: Unlimited 00:26:07.329 Max Number of Namespaces: 0 00:26:07.329 Max Number of I/O Queues: 1024 00:26:07.329 NVMe Specification Version (VS): 1.3 00:26:07.329 NVMe Specification Version (Identify): 1.3 00:26:07.329 Maximum Queue Entries: 128 00:26:07.329 Contiguous Queues Required: No 00:26:07.329 Arbitration Mechanisms Supported 00:26:07.329 Weighted Round Robin: Not Supported 00:26:07.329 Vendor Specific: Not Supported 00:26:07.329 Reset Timeout: 7500 ms 00:26:07.329 Doorbell Stride: 4 bytes 00:26:07.329 NVM Subsystem Reset: Not Supported 00:26:07.329 Command Sets Supported 00:26:07.329 NVM Command Set: Supported 00:26:07.329 Boot Partition: Not Supported 00:26:07.329 Memory Page Size Minimum: 4096 bytes 00:26:07.329 Memory Page Size Maximum: 4096 bytes 00:26:07.329 Persistent Memory Region: Not Supported 00:26:07.329 Optional Asynchronous Events Supported 00:26:07.329 Namespace Attribute Notices: Not Supported 00:26:07.329 Firmware Activation Notices: Not Supported 00:26:07.329 ANA Change Notices: Not Supported 00:26:07.329 PLE Aggregate Log Change Notices: Not Supported 00:26:07.329 LBA Status Info Alert Notices: Not Supported 00:26:07.329 EGE Aggregate Log Change Notices: Not Supported 00:26:07.329 Normal NVM Subsystem Shutdown event: Not Supported 00:26:07.329 Zone Descriptor Change Notices: Not Supported 00:26:07.329 Discovery Log Change Notices: Supported 00:26:07.329 Controller Attributes 00:26:07.329 128-bit Host Identifier: Not Supported 00:26:07.329 Non-Operational Permissive Mode: Not Supported 00:26:07.329 NVM Sets: Not Supported 00:26:07.329 Read Recovery Levels: Not Supported 00:26:07.329 Endurance Groups: Not Supported 00:26:07.329 Predictable Latency Mode: Not Supported 00:26:07.329 Traffic Based Keep ALive: Not Supported 00:26:07.329 Namespace Granularity: Not Supported 00:26:07.329 SQ Associations: Not Supported 00:26:07.329 UUID List: Not Supported 00:26:07.329 Multi-Domain Subsystem: Not Supported 00:26:07.329 Fixed Capacity Management: Not Supported 00:26:07.329 Variable Capacity Management: Not Supported 00:26:07.329 Delete Endurance Group: Not Supported 00:26:07.329 Delete NVM Set: Not Supported 00:26:07.329 Extended LBA Formats Supported: Not Supported 00:26:07.329 Flexible Data Placement Supported: Not Supported 00:26:07.329 00:26:07.329 Controller Memory Buffer Support 00:26:07.329 ================================ 00:26:07.329 Supported: No 00:26:07.329 00:26:07.329 Persistent Memory 
Region Support 00:26:07.329 ================================ 00:26:07.329 Supported: No 00:26:07.329 00:26:07.329 Admin Command Set Attributes 00:26:07.329 ============================ 00:26:07.329 Security Send/Receive: Not Supported 00:26:07.329 Format NVM: Not Supported 00:26:07.329 Firmware Activate/Download: Not Supported 00:26:07.329 Namespace Management: Not Supported 00:26:07.329 Device Self-Test: Not Supported 00:26:07.329 Directives: Not Supported 00:26:07.329 NVMe-MI: Not Supported 00:26:07.329 Virtualization Management: Not Supported 00:26:07.329 Doorbell Buffer Config: Not Supported 00:26:07.329 Get LBA Status Capability: Not Supported 00:26:07.329 Command & Feature Lockdown Capability: Not Supported 00:26:07.329 Abort Command Limit: 1 00:26:07.329 Async Event Request Limit: 1 00:26:07.329 Number of Firmware Slots: N/A 00:26:07.329 Firmware Slot 1 Read-Only: N/A 00:26:07.329 Firmware Activation Without Reset: N/A 00:26:07.329 Multiple Update Detection Support: N/A 00:26:07.329 Firmware Update Granularity: No Information Provided 00:26:07.329 Per-Namespace SMART Log: No 00:26:07.329 Asymmetric Namespace Access Log Page: Not Supported 00:26:07.329 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:07.329 Command Effects Log Page: Not Supported 00:26:07.329 Get Log Page Extended Data: Supported 00:26:07.329 Telemetry Log Pages: Not Supported 00:26:07.329 Persistent Event Log Pages: Not Supported 00:26:07.329 Supported Log Pages Log Page: May Support 00:26:07.329 Commands Supported & Effects Log Page: Not Supported 00:26:07.329 Feature Identifiers & Effects Log Page:May Support 00:26:07.329 NVMe-MI Commands & Effects Log Page: May Support 00:26:07.329 Data Area 4 for Telemetry Log: Not Supported 00:26:07.329 Error Log Page Entries Supported: 1 00:26:07.329 Keep Alive: Not Supported 00:26:07.329 00:26:07.329 NVM Command Set Attributes 00:26:07.329 ========================== 00:26:07.329 Submission Queue Entry Size 00:26:07.329 Max: 1 00:26:07.329 Min: 1 00:26:07.329 Completion Queue Entry Size 00:26:07.329 Max: 1 00:26:07.329 Min: 1 00:26:07.329 Number of Namespaces: 0 00:26:07.329 Compare Command: Not Supported 00:26:07.329 Write Uncorrectable Command: Not Supported 00:26:07.329 Dataset Management Command: Not Supported 00:26:07.329 Write Zeroes Command: Not Supported 00:26:07.329 Set Features Save Field: Not Supported 00:26:07.329 Reservations: Not Supported 00:26:07.329 Timestamp: Not Supported 00:26:07.329 Copy: Not Supported 00:26:07.329 Volatile Write Cache: Not Present 00:26:07.329 Atomic Write Unit (Normal): 1 00:26:07.329 Atomic Write Unit (PFail): 1 00:26:07.329 Atomic Compare & Write Unit: 1 00:26:07.329 Fused Compare & Write: Not Supported 00:26:07.329 Scatter-Gather List 00:26:07.329 SGL Command Set: Supported 00:26:07.329 SGL Keyed: Supported 00:26:07.329 SGL Bit Bucket Descriptor: Not Supported 00:26:07.329 SGL Metadata Pointer: Not Supported 00:26:07.329 Oversized SGL: Not Supported 00:26:07.329 SGL Metadata Address: Not Supported 00:26:07.329 SGL Offset: Supported 00:26:07.329 Transport SGL Data Block: Not Supported 00:26:07.329 Replay Protected Memory Block: Not Supported 00:26:07.329 00:26:07.329 Firmware Slot Information 00:26:07.329 ========================= 00:26:07.329 Active slot: 0 00:26:07.329 00:26:07.329 00:26:07.329 Error Log 00:26:07.329 ========= 00:26:07.329 00:26:07.329 Active Namespaces 00:26:07.329 ================= 00:26:07.329 Discovery Log Page 00:26:07.329 ================== 00:26:07.329 Generation Counter: 2 00:26:07.329 Number of 
Records: 2 00:26:07.329 Record Format: 0 00:26:07.329 00:26:07.329 Discovery Log Entry 0 00:26:07.329 ---------------------- 00:26:07.329 Transport Type: 1 (RDMA) 00:26:07.329 Address Family: 1 (IPv4) 00:26:07.329 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:07.329 Entry Flags: 00:26:07.329 Duplicate Returned Information: 0 00:26:07.329 Explicit Persistent Connection Support for Discovery: 0 00:26:07.329 Transport Requirements: 00:26:07.329 Secure Channel: Not Specified 00:26:07.329 Port ID: 1 (0x0001) 00:26:07.329 Controller ID: 65535 (0xffff) 00:26:07.329 Admin Max SQ Size: 32 00:26:07.329 Transport Service Identifier: 4420 00:26:07.329 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:07.329 Transport Address: 192.168.100.8 00:26:07.329 Transport Specific Address Subtype - RDMA 00:26:07.329 RDMA QP Service Type: 1 (Reliable Connected) 00:26:07.329 RDMA Provider Type: 1 (No provider specified) 00:26:07.329 RDMA CM Service: 1 (RDMA_CM) 00:26:07.329 Discovery Log Entry 1 00:26:07.329 ---------------------- 00:26:07.329 Transport Type: 1 (RDMA) 00:26:07.329 Address Family: 1 (IPv4) 00:26:07.329 Subsystem Type: 2 (NVM Subsystem) 00:26:07.329 Entry Flags: 00:26:07.329 Duplicate Returned Information: 0 00:26:07.329 Explicit Persistent Connection Support for Discovery: 0 00:26:07.329 Transport Requirements: 00:26:07.329 Secure Channel: Not Specified 00:26:07.329 Port ID: 1 (0x0001) 00:26:07.329 Controller ID: 65535 (0xffff) 00:26:07.329 Admin Max SQ Size: 32 00:26:07.329 Transport Service Identifier: 4420 00:26:07.330 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:07.330 Transport Address: 192.168.100.8 00:26:07.330 Transport Specific Address Subtype - RDMA 00:26:07.330 RDMA QP Service Type: 1 (Reliable Connected) 00:26:07.589 RDMA Provider Type: 1 (No provider specified) 00:26:07.589 RDMA CM Service: 1 (RDMA_CM) 00:26:07.589 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:07.589 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.589 get_feature(0x01) failed 00:26:07.589 get_feature(0x02) failed 00:26:07.589 get_feature(0x04) failed 00:26:07.589 ===================================================== 00:26:07.589 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:26:07.589 ===================================================== 00:26:07.589 Controller Capabilities/Features 00:26:07.589 ================================ 00:26:07.589 Vendor ID: 0000 00:26:07.589 Subsystem Vendor ID: 0000 00:26:07.589 Serial Number: e9993fb5e2c638826724 00:26:07.589 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:07.589 Firmware Version: 6.8.9-20 00:26:07.589 Recommended Arb Burst: 6 00:26:07.589 IEEE OUI Identifier: 00 00 00 00:26:07.589 Multi-path I/O 00:26:07.589 May have multiple subsystem ports: Yes 00:26:07.589 May have multiple controllers: Yes 00:26:07.589 Associated with SR-IOV VF: No 00:26:07.589 Max Data Transfer Size: 1048576 00:26:07.589 Max Number of Namespaces: 1024 00:26:07.589 Max Number of I/O Queues: 128 00:26:07.589 NVMe Specification Version (VS): 1.3 00:26:07.589 NVMe Specification Version (Identify): 1.3 00:26:07.589 Maximum Queue Entries: 128 00:26:07.589 Contiguous Queues Required: No 00:26:07.589 Arbitration Mechanisms Supported 00:26:07.589 Weighted Round Robin: Not Supported 
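
The discovery log closing above carries two records for the same RDMA portal (192.168.100.8:4420): entry 0 is the discovery subsystem itself (subsystem type 3) and entry 1 is the kernel-exported NVM subsystem nqn.2016-06.io.spdk:testnqn (subsystem type 2), which the identify run below then targets directly. For anyone reproducing this by hand, a roughly equivalent query with nvme-cli is sketched here; nvme-cli is not part of this test script, and the host must already have the fabrics modules loaded:

    # Hypothetical manual equivalent of the discovery step above (nvme-cli)
    modprobe nvme-rdma                                # also pulls in nvme-fabrics
    nvme discover -t rdma -a 192.168.100.8 -s 4420    # prints the same two log entries
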
00:26:07.589 Vendor Specific: Not Supported 00:26:07.589 Reset Timeout: 7500 ms 00:26:07.589 Doorbell Stride: 4 bytes 00:26:07.589 NVM Subsystem Reset: Not Supported 00:26:07.589 Command Sets Supported 00:26:07.589 NVM Command Set: Supported 00:26:07.589 Boot Partition: Not Supported 00:26:07.589 Memory Page Size Minimum: 4096 bytes 00:26:07.589 Memory Page Size Maximum: 4096 bytes 00:26:07.589 Persistent Memory Region: Not Supported 00:26:07.589 Optional Asynchronous Events Supported 00:26:07.589 Namespace Attribute Notices: Supported 00:26:07.589 Firmware Activation Notices: Not Supported 00:26:07.589 ANA Change Notices: Supported 00:26:07.589 PLE Aggregate Log Change Notices: Not Supported 00:26:07.589 LBA Status Info Alert Notices: Not Supported 00:26:07.589 EGE Aggregate Log Change Notices: Not Supported 00:26:07.589 Normal NVM Subsystem Shutdown event: Not Supported 00:26:07.589 Zone Descriptor Change Notices: Not Supported 00:26:07.589 Discovery Log Change Notices: Not Supported 00:26:07.589 Controller Attributes 00:26:07.589 128-bit Host Identifier: Supported 00:26:07.589 Non-Operational Permissive Mode: Not Supported 00:26:07.589 NVM Sets: Not Supported 00:26:07.589 Read Recovery Levels: Not Supported 00:26:07.589 Endurance Groups: Not Supported 00:26:07.589 Predictable Latency Mode: Not Supported 00:26:07.589 Traffic Based Keep ALive: Supported 00:26:07.589 Namespace Granularity: Not Supported 00:26:07.589 SQ Associations: Not Supported 00:26:07.589 UUID List: Not Supported 00:26:07.589 Multi-Domain Subsystem: Not Supported 00:26:07.589 Fixed Capacity Management: Not Supported 00:26:07.589 Variable Capacity Management: Not Supported 00:26:07.589 Delete Endurance Group: Not Supported 00:26:07.589 Delete NVM Set: Not Supported 00:26:07.589 Extended LBA Formats Supported: Not Supported 00:26:07.589 Flexible Data Placement Supported: Not Supported 00:26:07.589 00:26:07.589 Controller Memory Buffer Support 00:26:07.589 ================================ 00:26:07.589 Supported: No 00:26:07.589 00:26:07.589 Persistent Memory Region Support 00:26:07.589 ================================ 00:26:07.589 Supported: No 00:26:07.589 00:26:07.589 Admin Command Set Attributes 00:26:07.589 ============================ 00:26:07.589 Security Send/Receive: Not Supported 00:26:07.589 Format NVM: Not Supported 00:26:07.589 Firmware Activate/Download: Not Supported 00:26:07.589 Namespace Management: Not Supported 00:26:07.589 Device Self-Test: Not Supported 00:26:07.589 Directives: Not Supported 00:26:07.589 NVMe-MI: Not Supported 00:26:07.589 Virtualization Management: Not Supported 00:26:07.589 Doorbell Buffer Config: Not Supported 00:26:07.589 Get LBA Status Capability: Not Supported 00:26:07.589 Command & Feature Lockdown Capability: Not Supported 00:26:07.589 Abort Command Limit: 4 00:26:07.589 Async Event Request Limit: 4 00:26:07.589 Number of Firmware Slots: N/A 00:26:07.589 Firmware Slot 1 Read-Only: N/A 00:26:07.589 Firmware Activation Without Reset: N/A 00:26:07.589 Multiple Update Detection Support: N/A 00:26:07.589 Firmware Update Granularity: No Information Provided 00:26:07.589 Per-Namespace SMART Log: Yes 00:26:07.589 Asymmetric Namespace Access Log Page: Supported 00:26:07.589 ANA Transition Time : 10 sec 00:26:07.589 00:26:07.589 Asymmetric Namespace Access Capabilities 00:26:07.589 ANA Optimized State : Supported 00:26:07.589 ANA Non-Optimized State : Supported 00:26:07.589 ANA Inaccessible State : Supported 00:26:07.589 ANA Persistent Loss State : Supported 00:26:07.589 ANA Change State 
: Supported 00:26:07.589 ANAGRPID is not changed : No 00:26:07.589 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:07.589 00:26:07.589 ANA Group Identifier Maximum : 128 00:26:07.589 Number of ANA Group Identifiers : 128 00:26:07.589 Max Number of Allowed Namespaces : 1024 00:26:07.589 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:07.589 Command Effects Log Page: Supported 00:26:07.589 Get Log Page Extended Data: Supported 00:26:07.589 Telemetry Log Pages: Not Supported 00:26:07.589 Persistent Event Log Pages: Not Supported 00:26:07.589 Supported Log Pages Log Page: May Support 00:26:07.589 Commands Supported & Effects Log Page: Not Supported 00:26:07.589 Feature Identifiers & Effects Log Page:May Support 00:26:07.589 NVMe-MI Commands & Effects Log Page: May Support 00:26:07.589 Data Area 4 for Telemetry Log: Not Supported 00:26:07.589 Error Log Page Entries Supported: 128 00:26:07.589 Keep Alive: Supported 00:26:07.589 Keep Alive Granularity: 1000 ms 00:26:07.589 00:26:07.589 NVM Command Set Attributes 00:26:07.589 ========================== 00:26:07.589 Submission Queue Entry Size 00:26:07.589 Max: 64 00:26:07.589 Min: 64 00:26:07.589 Completion Queue Entry Size 00:26:07.589 Max: 16 00:26:07.589 Min: 16 00:26:07.589 Number of Namespaces: 1024 00:26:07.589 Compare Command: Not Supported 00:26:07.589 Write Uncorrectable Command: Not Supported 00:26:07.589 Dataset Management Command: Supported 00:26:07.589 Write Zeroes Command: Supported 00:26:07.589 Set Features Save Field: Not Supported 00:26:07.589 Reservations: Not Supported 00:26:07.589 Timestamp: Not Supported 00:26:07.589 Copy: Not Supported 00:26:07.589 Volatile Write Cache: Present 00:26:07.589 Atomic Write Unit (Normal): 1 00:26:07.589 Atomic Write Unit (PFail): 1 00:26:07.589 Atomic Compare & Write Unit: 1 00:26:07.589 Fused Compare & Write: Not Supported 00:26:07.589 Scatter-Gather List 00:26:07.590 SGL Command Set: Supported 00:26:07.590 SGL Keyed: Supported 00:26:07.590 SGL Bit Bucket Descriptor: Not Supported 00:26:07.590 SGL Metadata Pointer: Not Supported 00:26:07.590 Oversized SGL: Not Supported 00:26:07.590 SGL Metadata Address: Not Supported 00:26:07.590 SGL Offset: Supported 00:26:07.590 Transport SGL Data Block: Not Supported 00:26:07.590 Replay Protected Memory Block: Not Supported 00:26:07.590 00:26:07.590 Firmware Slot Information 00:26:07.590 ========================= 00:26:07.590 Active slot: 0 00:26:07.590 00:26:07.590 Asymmetric Namespace Access 00:26:07.590 =========================== 00:26:07.590 Change Count : 0 00:26:07.590 Number of ANA Group Descriptors : 1 00:26:07.590 ANA Group Descriptor : 0 00:26:07.590 ANA Group ID : 1 00:26:07.590 Number of NSID Values : 1 00:26:07.590 Change Count : 0 00:26:07.590 ANA State : 1 00:26:07.590 Namespace Identifier : 1 00:26:07.590 00:26:07.590 Commands Supported and Effects 00:26:07.590 ============================== 00:26:07.590 Admin Commands 00:26:07.590 -------------- 00:26:07.590 Get Log Page (02h): Supported 00:26:07.590 Identify (06h): Supported 00:26:07.590 Abort (08h): Supported 00:26:07.590 Set Features (09h): Supported 00:26:07.590 Get Features (0Ah): Supported 00:26:07.590 Asynchronous Event Request (0Ch): Supported 00:26:07.590 Keep Alive (18h): Supported 00:26:07.590 I/O Commands 00:26:07.590 ------------ 00:26:07.590 Flush (00h): Supported 00:26:07.590 Write (01h): Supported LBA-Change 00:26:07.590 Read (02h): Supported 00:26:07.590 Write Zeroes (08h): Supported LBA-Change 00:26:07.590 Dataset Management (09h): Supported 00:26:07.590 
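
The per-command error log printed next lines up with the three get_feature(0x01/0x02/0x04) failures reported at the top of this identify run: the kernel target does not implement those optional features, so each command completed with Status Code 0x2 under Status Code Type 0x0 (generic), i.e. Invalid Field in Command, and Error Location 0x28 points at byte 40 of the submission queue entry (CDW10, which holds the feature identifier). A single feature can be spot-checked with nvme-cli; the device node below is a placeholder for whatever /dev/nvmeX a manual connect would create:

    # Hypothetical spot-check of one of the failing features (nvme-cli)
    nvme get-feature /dev/nvme0 -f 0x01    # expect: Invalid Field in Command
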
00:26:07.590 Error Log 00:26:07.590 ========= 00:26:07.590 Entry: 0 00:26:07.590 Error Count: 0x3 00:26:07.590 Submission Queue Id: 0x0 00:26:07.590 Command Id: 0x5 00:26:07.590 Phase Bit: 0 00:26:07.590 Status Code: 0x2 00:26:07.590 Status Code Type: 0x0 00:26:07.590 Do Not Retry: 1 00:26:07.590 Error Location: 0x28 00:26:07.590 LBA: 0x0 00:26:07.590 Namespace: 0x0 00:26:07.590 Vendor Log Page: 0x0 00:26:07.590 ----------- 00:26:07.590 Entry: 1 00:26:07.590 Error Count: 0x2 00:26:07.590 Submission Queue Id: 0x0 00:26:07.590 Command Id: 0x5 00:26:07.590 Phase Bit: 0 00:26:07.590 Status Code: 0x2 00:26:07.590 Status Code Type: 0x0 00:26:07.590 Do Not Retry: 1 00:26:07.590 Error Location: 0x28 00:26:07.590 LBA: 0x0 00:26:07.590 Namespace: 0x0 00:26:07.590 Vendor Log Page: 0x0 00:26:07.590 ----------- 00:26:07.590 Entry: 2 00:26:07.590 Error Count: 0x1 00:26:07.590 Submission Queue Id: 0x0 00:26:07.590 Command Id: 0x0 00:26:07.590 Phase Bit: 0 00:26:07.590 Status Code: 0x2 00:26:07.590 Status Code Type: 0x0 00:26:07.590 Do Not Retry: 1 00:26:07.590 Error Location: 0x28 00:26:07.590 LBA: 0x0 00:26:07.590 Namespace: 0x0 00:26:07.590 Vendor Log Page: 0x0 00:26:07.590 00:26:07.590 Number of Queues 00:26:07.590 ================ 00:26:07.590 Number of I/O Submission Queues: 128 00:26:07.590 Number of I/O Completion Queues: 128 00:26:07.590 00:26:07.590 ZNS Specific Controller Data 00:26:07.590 ============================ 00:26:07.590 Zone Append Size Limit: 0 00:26:07.590 00:26:07.590 00:26:07.590 Active Namespaces 00:26:07.590 ================= 00:26:07.590 get_feature(0x05) failed 00:26:07.590 Namespace ID:1 00:26:07.590 Command Set Identifier: NVM (00h) 00:26:07.590 Deallocate: Supported 00:26:07.590 Deallocated/Unwritten Error: Not Supported 00:26:07.590 Deallocated Read Value: Unknown 00:26:07.590 Deallocate in Write Zeroes: Not Supported 00:26:07.590 Deallocated Guard Field: 0xFFFF 00:26:07.590 Flush: Supported 00:26:07.590 Reservation: Not Supported 00:26:07.590 Namespace Sharing Capabilities: Multiple Controllers 00:26:07.590 Size (in LBAs): 1953525168 (931GiB) 00:26:07.590 Capacity (in LBAs): 1953525168 (931GiB) 00:26:07.590 Utilization (in LBAs): 1953525168 (931GiB) 00:26:07.590 UUID: 7245d702-90a5-4622-9e9d-395e1cf473cc 00:26:07.590 Thin Provisioning: Not Supported 00:26:07.590 Per-NS Atomic Units: Yes 00:26:07.590 Atomic Boundary Size (Normal): 0 00:26:07.590 Atomic Boundary Size (PFail): 0 00:26:07.590 Atomic Boundary Offset: 0 00:26:07.590 NGUID/EUI64 Never Reused: No 00:26:07.590 ANA group ID: 1 00:26:07.590 Namespace Write Protected: No 00:26:07.590 Number of LBA Formats: 1 00:26:07.590 Current LBA Format: LBA Format #00 00:26:07.590 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:07.590 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.590 
19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:07.590 rmmod nvme_rdma 00:26:07.590 rmmod nvme_fabrics 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:07.590 19:16:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:07.590 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:07.849 19:17:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:10.386 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.386 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.645 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:11.583 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:11.583 00:26:11.583 real 0m14.651s 00:26:11.583 user 0m4.405s 00:26:11.583 sys 0m8.641s 00:26:11.583 19:17:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 
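
The clean_kernel_target teardown traced above (unlink the port's subsystem symlink, rmdir the namespace, the port, and the subsystem, then modprobe -r nvmet_rdma nvmet) is the exact mirror of how such a kernel target is assembled through the nvmet configfs ABI. A minimal setup sketch, with the backing block device as a placeholder:

    # Minimal sketch of the configfs setup this test tears down (paths per the
    # kernel nvmet configfs ABI; /dev/nvme0n1 is a placeholder backing device)
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir $subsys
    echo 1 > $subsys/attr_allow_any_host
    mkdir $subsys/namespaces/1
    echo -n /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1 > $subsys/namespaces/1/enable
    mkdir $port
    echo rdma > $port/addr_trtype
    echo ipv4 > $port/addr_adrfam
    echo 192.168.100.8 > $port/addr_traddr
    echo 4420 > $port/addr_trsvcid
    ln -s $subsys $port/subsystems/nqn.2016-06.io.spdk:testnqn
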
00:26:11.583 19:17:03 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.583 ************************************ 00:26:11.583 END TEST nvmf_identify_kernel_target 00:26:11.583 ************************************ 00:26:11.583 19:17:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:26:11.583 19:17:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.583 19:17:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.583 19:17:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.583 ************************************ 00:26:11.583 START TEST nvmf_auth_host 00:26:11.583 ************************************ 00:26:11.583 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:26:11.842 * Looking for test storage... 00:26:11.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.842 19:17:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:11.842 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.843 19:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.406 
19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:18.406 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:26:18.407 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:18.407 19:17:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:26:18.407 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:26:18.407 Found net devices under 0000:af:00.0: mlx_0_0 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:26:18.407 Found net devices under 0000:af:00.1: mlx_0_1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:18.407 19:17:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
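
The get_ip_address helper being traced here reduces to a single pipeline: list the interface's IPv4 addresses in single-line form, take the fourth field (address/prefix), and strip the prefix length. As a standalone sketch, using the interface name from this run:

    # Print an interface's first IPv4 address without the prefix length
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
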
00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:18.407 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:18.407 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:26:18.407 altname enp175s0f0np0 00:26:18.407 altname ens801f0np0 00:26:18.407 inet 192.168.100.8/24 scope global mlx_0_0 00:26:18.407 valid_lft forever preferred_lft forever 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:18.407 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:18.407 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:26:18.407 altname enp175s0f1np1 00:26:18.407 altname ens801f1np1 00:26:18.407 inet 192.168.100.9/24 scope global mlx_0_1 00:26:18.407 valid_lft forever preferred_lft forever 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.407 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.408 
19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:18.408 192.168.100.9' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:18.408 192.168.100.9' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:18.408 192.168.100.9' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:18.408 19:17:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=893449 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 893449 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 893449 ']' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.408 19:17:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.408 19:17:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=17f8827fa85c1b8929fa1f791158f408 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.x8A 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 17f8827fa85c1b8929fa1f791158f408 0 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 17f8827fa85c1b8929fa1f791158f408 0 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=17f8827fa85c1b8929fa1f791158f408 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:18.408 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.x8A 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.x8A 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.x8A 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a8c390cf66a9f9ac738313ad5396ce7111ff693bc11b08fbbc8f3021474c454 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DdB 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a8c390cf66a9f9ac738313ad5396ce7111ff693bc11b08fbbc8f3021474c454 3 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a8c390cf66a9f9ac738313ad5396ce7111ff693bc11b08fbbc8f3021474c454 3 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a8c390cf66a9f9ac738313ad5396ce7111ff693bc11b08fbbc8f3021474c454 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DdB 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DdB 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DdB 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ee36fdf957bc42aee0095a43ec0ea088f562f5c5e67e6622 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.20E 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ee36fdf957bc42aee0095a43ec0ea088f562f5c5e67e6622 0 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ee36fdf957bc42aee0095a43ec0ea088f562f5c5e67e6622 0 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ee36fdf957bc42aee0095a43ec0ea088f562f5c5e67e6622 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.20E 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.20E 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.20E 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:18.667 19:17:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=85e428c2945f45c70ab2e8745b0080f04650fa938e7d05d8 00:26:18.667 19:17:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HPp 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85e428c2945f45c70ab2e8745b0080f04650fa938e7d05d8 2 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85e428c2945f45c70ab2e8745b0080f04650fa938e7d05d8 2 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85e428c2945f45c70ab2e8745b0080f04650fa938e7d05d8 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HPp 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HPp 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HPp 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cbe54050a7896e2db1c697850f2c0eb9 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Pq4 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cbe54050a7896e2db1c697850f2c0eb9 1 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cbe54050a7896e2db1c697850f2c0eb9 1 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cbe54050a7896e2db1c697850f2c0eb9 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:18.667 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Pq4 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Pq4 00:26:18.668 19:17:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Pq4 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ef8edd3e4f2bea3f6dafa2963517b22b 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xrc 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ef8edd3e4f2bea3f6dafa2963517b22b 1 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ef8edd3e4f2bea3f6dafa2963517b22b 1 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ef8edd3e4f2bea3f6dafa2963517b22b 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:18.668 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xrc 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xrc 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xrc 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c1a6cb9634aaf624e05827c46b625b0812961d52c900aca 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MQu 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key 8c1a6cb9634aaf624e05827c46b625b0812961d52c900aca 2 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c1a6cb9634aaf624e05827c46b625b0812961d52c900aca 2 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c1a6cb9634aaf624e05827c46b625b0812961d52c900aca 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MQu 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MQu 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.MQu 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b8609ffa4374025c8f1da127ed1c620 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rMa 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b8609ffa4374025c8f1da127ed1c620 0 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b8609ffa4374025c8f1da127ed1c620 0 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b8609ffa4374025c8f1da127ed1c620 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rMa 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rMa 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rMa 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- 
# local digest len file key 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=66b874a302748b302f8300c4bdf0ae8b5f016382f0486cc3f20b43116890e39c 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YBX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 66b874a302748b302f8300c4bdf0ae8b5f016382f0486cc3f20b43116890e39c 3 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 66b874a302748b302f8300c4bdf0ae8b5f016382f0486cc3f20b43116890e39c 3 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=66b874a302748b302f8300c4bdf0ae8b5f016382f0486cc3f20b43116890e39c 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YBX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YBX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.YBX 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 893449 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 893449 ']' 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
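[Annotation] The gen_dhchap_key/format_key traces above all follow one recipe: xxd pulls len/2 random bytes from /dev/urandom as a len-character hex string, and an inline python wraps that ASCII hex in the DHHC-1 envelope used by NVMe DH-HMAC-CHAP, DHHC-1:<digest>:<base64(secret + crc32)>:, with digest codes null=0, sha256=1, sha384=2, sha512=3. The mapping is visible in the log itself: the 48-character secret ee36fdf957bc42aee0095a43ec0ea088f562f5c5e67e6622 reappears below as DHHC-1:00:ZWUzNmZk...IueGbg==:, i.e. base64 of the hex string followed by its 4-byte little-endian CRC32. A minimal bash sketch of that recipe (an approximation of nvmf/common.sh, not the script itself; assumes python3 is on PATH):

  gen_dhchap_key() {
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      # len hex characters of secret material, as in the xxd traces above
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # DHHC-1:<digest>:<base64(secret + crc32_le(secret))>:
      python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); c = zlib.crc32(k).to_bytes(4, "little"); print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(k + c).decode()}:")' "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }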
00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.927 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.x8A 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DdB ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DdB 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.20E 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.HPp ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HPp 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Pq4 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xrc ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xrc 00:26:19.188 19:17:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.MQu 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rMa ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rMa 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.YBX 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:26:19.188 19:17:11 
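[Annotation] The host/auth.sh@80-82 loop that just finished hands each generated file to the running SPDK application under a stable keyring name; rpc_cmd is the usual wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Condensed, using the keys/ckeys arrays built above, before the kernel-target setup that follows:

  for i in "${!keys[@]}"; do
      scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
      # controller (bidirectional) keys are optional; keyid 4 has none (ckeys[4]= above)
      [[ -n ${ckeys[i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done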
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:19.188 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:19.448 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:19.448 19:17:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:26:21.978 Waiting for block devices as requested 00:26:21.978 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:22.236 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.236 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.236 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.236 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.494 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.494 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:22.494 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.494 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.752 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.753 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.753 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:23.010 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:23.010 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:23.010 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:23.010 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:23.269 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:23.836 No valid GPT data, bailing 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:23.836 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 --hostid=80bdebd3-4c74-ea11-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:26:24.095 00:26:24.096 Discovery Log Number of Records 2, Generation counter 2 00:26:24.096 =====Discovery Log Entry 0====== 00:26:24.096 trtype: rdma 00:26:24.096 adrfam: ipv4 00:26:24.096 subtype: current discovery subsystem 00:26:24.096 treq: not specified, sq flow control disable supported 00:26:24.096 portid: 1 00:26:24.096 trsvcid: 4420 00:26:24.096 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:24.096 traddr: 192.168.100.8 00:26:24.096 eflags: none 00:26:24.096 rdma_prtype: not specified 00:26:24.096 rdma_qptype: connected 00:26:24.096 rdma_cms: rdma-cm 00:26:24.096 rdma_pkey: 0x0000 00:26:24.096 =====Discovery Log Entry 1====== 00:26:24.096 trtype: rdma 00:26:24.096 adrfam: ipv4 00:26:24.096 subtype: nvme subsystem 00:26:24.096 treq: not specified, sq flow control disable supported 00:26:24.096 portid: 1 00:26:24.096 trsvcid: 4420 00:26:24.096 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:24.096 traddr: 192.168.100.8 00:26:24.096 eflags: none 00:26:24.096 rdma_prtype: not specified 00:26:24.096 rdma_qptype: connected 00:26:24.096 rdma_cms: rdma-cm 00:26:24.096 rdma_pkey: 0x0000 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
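[Annotation] configure_kernel_target (nvmf/common.sh@632-677 above) exports a local namespace through the kernel nvmet target at the address get_main_ns_ip resolved (192.168.100.8): it picks the first non-zoned NVMe block device with no partition table, builds the subsystem/namespace/port objects in configfs, and links the port to the subsystem; the nvme discover call then confirms both discovery-log entries. xtrace does not show redirection targets, so the configfs attribute names in this sketch are the standard kernel nvmet attributes, inferred rather than read from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet

  # first non-zoned, unpartitioned local NVMe namespace (@650-653); the trace
  # also consults spdk-gpt.py, for which blkid alone is a simplified stand-in
  for block in /sys/block/nvme*; do
      [[ $(<"$block/queue/zoned") == none ]] || continue
      [[ -z $(blkid -s PTTYPE -o value "/dev/${block##*/}") ]] || continue
      nvme=/dev/${block##*/} && break
  done

  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"   # host/auth.sh@37's echo 0 plausibly revokes this once per-host auth is linked
  echo "$nvme" > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
  echo rdma > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"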
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
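[Annotation] On the target side, nvmet_auth_set_key (host/auth.sh@42-51) writes the chosen hash, DH group, host key and, when the keyid has one, the controller key into the host entry just linked into allowed_hosts. The redirection targets are again hidden by xtrace; the dhchap_* paths below are the kernel nvmet host attributes, stated as an assumption:

  # nvmet_auth_set_key sha256 ffdhe2048 1, as traced (attribute names inferred)
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048 > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==:" > "$host/dhchap_key"
  echo "DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==:" > "$host/dhchap_ctrl_key"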
host/auth.sh@61 -- # get_main_ns_ip 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.096 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.355 nvme0n1 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.355 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.614 nvme0n1 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.614 19:17:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
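[Annotation] Each connect_authenticate pass (host/auth.sh@55-65) has the same shape: pin the initiator's DH-HMAC-CHAP digests and dhgroups with bdev_nvme_set_options, attach over RDMA using the keyring names registered earlier, check that exactly the expected controller (nvme0) came up, then detach. Condensed from the key0/ckey0 round that just completed:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0   # prints nvme0n1 on success
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0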
00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.614 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.615 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.874 nvme0n1 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.874 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.133 nvme0n1 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.134 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.393 nvme0n1 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.393 19:17:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.393 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.653 19:17:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.912 nvme0n1 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.912 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 
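
The cycle traced above repeats for every digest/dhgroup/keyid combination. Condensed into plain shell, one pass looks roughly like the sketch below; rpc_cmd, the NQNs, the verification steps and the ckey expansion idiom are taken verbatim from the trace, while digest, dhgroup, keyid and the ckeys array are assumed to be supplied by the enclosing loops in host/auth.sh.

# One connect_authenticate pass, condensed from the trace. ckeys[keyid] holds
# the controller key; when it is empty (keyid 4 above) the :+ expansion drops
# the --dhchap-ctrlr-key pair entirely, exercising unidirectional auth.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
# Verify the controller authenticated and came up, then detach for the next pass.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
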
00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.913 19:17:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.913 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.172 nvme0n1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.172 19:17:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.172 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.173 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.432 nvme0n1 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
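
Each pass begins on the target side: nvmet_auth_set_key (host/auth.sh@42-51 in the trace) pushes the digest, the DH group and the DHHC-1 secrets to the target before the host attempts to attach. A hedged sketch of what those four echo lines plausibly write, assuming the kernel nvmet configfs layout; the trace itself only shows the echoed values, not their destinations.

# Hypothetical expansion of nvmet_auth_set_key. The configfs paths are an
# assumption; only the hostnqn and the echoed values appear in the trace.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # auth.sh@48
echo "$dhgroup"     > "$host_dir/dhchap_dhgroup"  # auth.sh@49
echo "$key"         > "$host_dir/dhchap_key"      # auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # auth.sh@51
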
00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.432 19:17:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 nvme0n1 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.691 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.951 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.210 nvme0n1 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.210 19:17:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.210 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.211 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.470 nvme0n1 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.470 
19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.470 19:17:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.729 nvme0n1 00:26:27.729 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.729 
19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.729 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.729 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.729 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.988 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.247 nvme0n1 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.247 
19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.247 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.815 nvme0n1 00:26:28.815 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.815 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.815 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.815 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.815 19:17:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.815 19:17:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.815 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.816 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.816 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.816 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.075 nvme0n1 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.075 
19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.075 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.334 nvme0n1 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.334 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.593 19:17:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 nvme0n1 00:26:29.851 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.851 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.851 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.851 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
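For reference, the host-side RPC sequence that each connect_authenticate call in this trace performs can be reproduced by hand. A minimal sketch, assuming SPDK's scripts/rpc.py is what the autotest's rpc_cmd wrapper invokes, and that key0/ckey0 are key names registered with the host earlier in the test (not shown in this excerpt); the transport, addresses, and NQNs are taken verbatim from the trace:

  # limit the host to the digest/dhgroup combination under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # attach over RDMA with DH-HMAC-CHAP; --dhchap-ctrlr-key enables bidirectional authentication
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0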
00:26:29.851 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.110 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.111 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.678 nvme0n1 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
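Each attach in the trace is then verified and torn down before the next key index is tried: the controller list is fetched over RPC, the reported name is matched against nvme0, and the controller is detached. A sketch of that check, under the same rpc.py assumption as above:

  # authentication succeeded only if the attach produced controller nvme0
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0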
00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.678 19:17:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.678 19:17:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.936 nvme0n1 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.936 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.194 19:17:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.194 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.452 nvme0n1 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.452 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.711 19:17:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.711 19:17:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.970 nvme0n1 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.970 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 
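On the target side, each nvmet_auth_set_key call in the trace echoes the HMAC name, the DH group, and the DHHC-1 key material into the kernel nvmet host entry before the host reconnects. The redirect targets are not visible in the xtrace output; a hypothetical sketch, assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and using the keyid-0 values shown in the trace:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest under test
  echo ffdhe8192 > "$host/dhchap_dhgroup"      # DH group under test
  echo 'DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK:' > "$host/dhchap_key"
  echo 'DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=:' > "$host/dhchap_ctrl_key"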
00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.229 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.230 19:17:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 nvme0n1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.798 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.735 nvme0n1 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:33.735 19:17:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.735 19:17:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.303 nvme0n1 00:26:34.303 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.303 
19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.304 19:17:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.872 nvme0n1 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.872 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.131 19:17:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.131 19:17:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.699 nvme0n1 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
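The loop headers visible in the trace (host/auth.sh@100 through @103) give the overall shape of this test: every digest is crossed with every DH group and every key index, and from this point the sha384 passes begin. Condensed to a skeleton, with the two helpers standing in for the target-side and host-side steps sketched above:

  for digest in "${digests[@]}"; do          # sha256 so far; sha384 from here on
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ffdhe2048, ...
      for keyid in "${!keys[@]}"; do         # key indices 0 through 4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # install key on the kernel nvmet target
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach
      done
    done
  done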
00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.699 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.700 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 nvme0n1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.959 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 nvme0n1 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.219 19:17:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.219 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.478 nvme0n1 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.478 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.737 19:17:28 
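The get_main_ns_ip helper traced in the surrounding records (nvmf/common.sh@741-755) only decides which address the initiator dials for the transport under test; for rdma that is NVMF_FIRST_TARGET_IP, which resolves to 192.168.100.8 throughout this run. Paraphrasing the logic visible in the trace:

  # paraphrased from the nvmf/common.sh@741-755 records in this log;
  # TEST_TRANSPORT is rdma for this job
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable to read
      [[ -z ${!ip} ]] && return 1           # indirect expansion: it must be set
      echo "${!ip}"                         # 192.168.100.8 here
  }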
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.737 19:17:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.737 nvme0n1 00:26:36.737 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.738 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.738 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.738 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.738 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.738 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.997 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:37.256 nvme0n1 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.256 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.257 
19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.257 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.516 nvme0n1 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.516 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.517 19:17:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.777 nvme0n1 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.777 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.036 nvme0n1 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.036 19:17:30 
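The records around this point complete one full connect_authenticate pass (auth.sh@55-65): restrict the initiator to the digest and DH group under test, attach with the matching key pair, verify the controller actually came up, then detach so the next keyid starts clean. The bare nvme0n1 lines are the bdev name the attach RPC prints on success. Condensed, each pass in this stretch of the log amounts to:

  # condensed from the auth.sh@55-65 records; rpc_cmd wraps scripts/rpc.py,
  # and key2/ckey2 are key names registered earlier in the test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2               # prints nvme0n1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0                    # reset for next keyid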
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.036 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.299 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.559 nvme0n1 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.559 19:17:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.559 19:17:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.818 nvme0n1 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:38.818 19:17:31 
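By this point the sweep has rolled over from ffdhe3072 to ffdhe4096 (the "for dhgroup" record above). The whole section is one pass of the auth.sh@100-104 loops, pairing every digest with every DH group and every key id, which is why the same set_key/connect/detach sequence repeats with only the parameters changing:

  # shape of the sweep driving these records (auth.sh@100-104); the array
  # contents are inferred from the parameters visible in this section
  for digest in "${digests[@]}"; do          # includes sha384
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ffdhe3072 ffdhe4096 ...
          for keyid in "${!keys[@]}"; do     # 0..4 in this run
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done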
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.818 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.076 nvme0n1 00:26:39.076 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.076 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.076 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.076 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.077 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.335 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.335 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.335 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:39.335 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.336 19:17:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:39.336 19:17:31 
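A note on the DHHC-1 strings that recur through these records: they are NVMe DH-HMAC-CHAP secrets in the standard nvme-cli representation. The two digits after DHHC-1: name the optional transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the payload is the base64 of the secret bytes with a 4-byte CRC-32 appended. A quick length check on the keyid-0 host secret used in this section:

  # decoded payload = secret length + 4-byte CRC-32; a 32-byte secret
  # therefore decodes to 36 bytes
  key='DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK:'
  payload=${key#DHHC-1:*:}   # strip the DHHC-1:00: prefix
  payload=${payload%:}       # and the trailing colon
  echo -n "$payload" | base64 -d | wc -c   # prints 36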
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.336 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.595 nvme0n1 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:39.595 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.596 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.596 19:17:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.854 nvme0n1 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.854 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.113 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.114 19:17:32 
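
auth.sh@58, visible throughout the trace as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), builds the optional controller-key arguments with bash's ${var:+word} expansion: two extra words when a controller key exists for the slot, an empty array otherwise. That is why the keyid=4 attach just below passes --dhchap-key key4 with no --dhchap-ctrlr-key (ckeys[4] is empty). A standalone illustration of the idiom, with hypothetical placeholder values:

  # ${var:+...} expands to the flag pair only for non-empty entries.
  ckeys=(c0secret c1secret c2secret c3secret "")   # slot 4 deliberately empty
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-<no controller-key flag>}"
  done

Slots 0-3 print the two-word flag pair; slot 4 prints the fallback because the empty string fails the :+ test.
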
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.114 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.373 nvme0n1 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.373 19:17:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.632 nvme0n1 00:26:40.632 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.632 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.632 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.632 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.632 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.632 19:17:33 
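
Each connect_authenticate round in this stream (auth.sh@55-65) is the host side of one key trial: pin SPDK's bdev/nvme module to a single digest and DH group, attach over RDMA with the key under test, check that the controller materialised, then detach so the next slot starts clean. Condensed from the xtrace, using the keyid=1 arguments from the ffdhe4096 pass above (rpc_cmd is the suite's wrapper around SPDK's rpc.py):

  # One host-side authentication round, condensed from the trace.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid

The \n\v\m\e\0 form in the trace is only xtrace quoting: the right-hand side of [[ == ]] is escaped so it compares literally instead of as a glob pattern.
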
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.890 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.891 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.149 nvme0n1 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.149 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.407 19:17:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.666 nvme0n1 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.666 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.925 19:17:34 
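
On the target side, each nvmet_auth_set_key call (auth.sh@42-51) repoints the kernel nvmet host entry at the digest, DH group and key about to be tried; the bare echoes at @48-51 in the trace are those writes, with their redirections hidden by xtrace. A plausible reconstruction for the ffdhe6144 pass, assuming the standard nvmet configfs attributes under /sys/kernel/config/nvmet/hosts; the paths are inferred, not taken from this log:

  # Inferred target-side programming for one (digest, dhgroup, keyid) tuple.
  hostnqn=nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"       # auth.sh@48
  echo ffdhe6144 > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"         # auth.sh@49
  echo "$key" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"                # auth.sh@50
  [[ -n $ckey ]] && echo "$ckey" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # auth.sh@51

The [[ -z ... ]] guard at @51 in the trace corresponds to the controller-key write being skipped when no ckey exists for the slot.
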
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.925 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.183 nvme0n1 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.183 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:42.442 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:42.443 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.443 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.443 19:17:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.702 nvme0n1 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.702 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.961 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.220 nvme0n1 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.220 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:43.479 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.480 19:17:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.048 nvme0n1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.048 19:17:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.985 nvme0n1 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.985 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.553 nvme0n1 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:45.553 
19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.553 19:17:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.121 nvme0n1 00:26:46.121 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.121 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.121 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.121 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.121 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.381 19:17:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.949 nvme0n1 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.949 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.950 19:17:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.950 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.209 nvme0n1 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.209 19:17:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.209 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.210 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 nvme0n1 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.468 19:17:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.468 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
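The trace above repeats one connect_authenticate pass (host/auth.sh@55-65) per digest/dhgroup/keyid combination: restrict the host to the digest and DH group under test, attach over RDMA with the numbered key pair, check that a live nvme0 controller appeared, then detach. A minimal bash sketch of that cycle, reconstructed from the rpc_cmd calls visible in the xtrace output (the helper's actual body is not shown in the log, so treat this as an approximation):

    # Reconstructed from the host/auth.sh xtrace above. rpc_cmd is the test
    # suite's JSON-RPC wrapper; keys/ckeys hold the DHHC-1 secrets, registered
    # under the names "keyN"/"ckeyN" used by the attach call in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key argument is only added when a ckey exists (auth.sh@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Restrict the initiator to the digest and DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach to the RDMA target with the numbered DH-HMAC-CHAP key(s).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only yields a controller if authentication succeeded.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }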
00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.728 19:17:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.728 nvme0n1 00:26:47.728 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.728 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.728 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.728 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.728 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.988 19:17:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.988 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.246 nvme0n1 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.246 19:17:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.246 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 nvme0n1 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
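From here the same loop repeats for the ffdhe3072 group. Before each host-side connect, the iteration provisions the numbered key pair on the target via nvmet_auth_set_key (host/auth.sh@42-51 in the trace). The redirection targets of its echo calls do not appear in xtrace output; the configfs layout in this sketch is an assumption based on how Linux nvmet DH-HMAC-CHAP host attributes are typically written:

    # Target-side counterpart of the trace. The four echo lines correspond to
    # auth.sh@48-51; the configfs host entry below is an assumed destination,
    # not something visible in the log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"      # auth.sh@48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # auth.sh@49
        echo "$key"          > "$host/dhchap_key"       # auth.sh@50
        # A controller (bidirectional) key is optional; keyid 4 in this run
        # has an empty ckey, so the attribute is only written when present.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51
    }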
00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.504 19:17:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.762 nvme0n1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.762 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.019 nvme0n1 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.019 19:17:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.019 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.277 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.278 19:17:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.278 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.536 nvme0n1 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 
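The DHHC-1 strings in these key assignments follow the NVMe-oF DH-HMAC-CHAP shared-secret representation, DHHC-1:<t>:<base64 payload>:, where <t> is 00 for an untransformed secret and 01/02/03 for a SHA-256-, SHA-384-, or SHA-512-sized one, and the payload is the secret followed by a 4-byte CRC-32 trailer (this reading of the format comes from the spec, not from the log itself). The lengths in this run are consistent with that; a quick check against one key from the trace:

    # A :02: (SHA-384-sized) secret should decode to 48 bytes of key material
    # plus the 4-byte CRC-32 trailer, i.e. 52 bytes total.
    key='DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==:'
    b64=${key#DHHC-1:*:}              # strip the "DHHC-1:02:" prefix
    b64=${b64%:}                      # strip the trailing colon
    echo "$b64" | base64 -d | wc -c   # prints 52 = 48-byte secret + 4-byte CRC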
00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.536 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.537 19:17:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.537 19:17:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.795 nvme0n1 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:49.795 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.796 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.055 nvme0n1 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.055 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.623 nvme0n1 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.623 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.624 19:17:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.883 nvme0n1 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.883 19:17:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:50.883 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.884 19:17:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.884 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.450 nvme0n1 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.451 19:17:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 nvme0n1 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.710 
19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.710 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 nvme0n1 00:26:51.969 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.969 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.969 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.969 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.969 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.228 19:17:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.228 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.229 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.487 nvme0n1 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.487 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.746 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.746 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.747 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.747 19:17:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.747 19:17:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.747 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.005 nvme0n1 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.005 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
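[Annotation] Each block of trace above is one pass of the test's inner loop: for every (digest, dhgroup, keyid) combination, host/auth.sh installs the key on the kernel nvmet target, restricts the SPDK host to the matching DH-HMAC-CHAP parameters, connects over RDMA, verifies the controller came up, and detaches. A minimal sketch of one pass, reconstructed only from the commands visible in the xtrace — rpc_cmd, nvmet_auth_set_key, and get_main_ns_ip are SPDK test helpers assumed to be sourced, and the secrets shown are the keyid=1 values from this log:

    #!/usr/bin/env bash
    # One (digest, dhgroup, keyid) pass, reconstructed from the xtrace above.
    # Assumes SPDK's nvmf test helpers (rpc_cmd, nvmet_auth_set_key,
    # get_main_ns_ip) have been sourced, as in test/nvmf/host/auth.sh.
    digest=sha512 dhgroup=ffdhe6144 keyid=1
    ckey="DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==:"

    # Target side: install the host key (and ctrlr key, if any) for this keyid.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: restrict negotiation to the digest/dhgroup under test ...
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # ... then connect with DH-HMAC-CHAP; auth is bidirectional only when a
    # controller key exists (mirrors the array expansion at host/auth.sh@58).
    ckey_opt=(${ckey:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_opt[@]}"

    # Verify authentication succeeded, then tear down for the next iteration.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

On the target side, the echoes inside nvmet_auth_set_key ('hmac(sha512)', the dhgroup name, and the key strings at host/auth.sh@48-51) match the value formats of the kernel nvmet host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), though the redirection targets themselves are not shown in the trace. The DHHC-1:NN: prefix on each secret encodes the HMAC transform used when the key was generated (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the key strings for the different key ids differ in length.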
00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.264 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 
00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.265 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.524 nvme0n1 00:26:53.524 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.524 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.524 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.524 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.524 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.782 19:17:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.783 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.351 nvme0n1 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.351 19:17:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.610 nvme0n1 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.610 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 
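The repeated echo 'hmac(...)' / echo <dhgroup> / echo DHHC-1:... triplets in this trace are nvmet_auth_set_key programming the kernel target's expected credentials for the test host. The actual writes land in nvmet's configfs host entry; a minimal sketch of that pattern, assuming the host entry was created earlier in the run and that the standard nvmet attribute names apply (the configfs paths are not shown in this excerpt, so treat them as assumptions):

    # Sketch only: program DH-HMAC-CHAP credentials on the kernel nvmet target.
    # Assumes the host entry already exists (created earlier, outside this excerpt).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest under test
    echo 'ffdhe8192'    > "$host/dhchap_dhgroup"    # FFDHE group under test
    echo "$key"         > "$host/dhchap_key"        # host secret (the DHHC-1:.. value from the trace)
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # controller secret, if bidirectional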
00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTdmODgyN2ZhODVjMWI4OTI5ZmExZjc5MTE1OGY0MDgrf3yK: 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmE4YzM5MGNmNjZhOWY5YWM3MzgzMTNhZDUzOTZjZTcxMTFmZjY5M2JjMTFiMDhmYmJjOGYzMDIxNDc0YzQ1NPqzhqc=: 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.869 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 nvme0n1 
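Each connect_authenticate <digest> <dhgroup> <keyid> pass then drives the SPDK initiator side: it pins the host to a single digest and DH group with bdev_nvme_set_options, attaches with the named key pair, verifies the controller came up, and detaches. Condensed into a standalone sketch using exactly the flags visible in the rpc_cmd lines above (assuming the rpc.py wrapper, and that key0/ckey0 were registered with the application earlier in the script):

    # Sketch of one connect_authenticate cycle, flags as seen in the trace.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next keyid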
00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.437 19:17:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.437 19:17:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 nvme0n1 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2JlNTQwNTBhNzg5NmUyZGIxYzY5Nzg1MGYyYzBlYjnx3O7b: 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWY4ZWRkM2U0ZjJiZWEzZjZkYWZhMjk2MzUxN2IyMmKERBIQ: 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 
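The ip_candidates block repeated before every attach is get_main_ns_ip resolving which address variable applies to the transport in use. Reconstructed from the nvmf/common.sh xtrace above (the transport variable's name is not visible once expanded, so $TEST_TRANSPORT here is an assumption):

    # Reconstruction of get_main_ns_ip from the xtrace (common.sh@741-@755).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # rdma -> NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1            # candidate variable must be populated
        echo "${!ip}"                          # indirect expansion; here 192.168.100.8
    }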
00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.373 19:17:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 nvme0n1 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGMxYTZjYjk2MzRhYWY2MjRlMDU4MjdjNDZiNjI1YjA4MTI5NjFkNTJjOTAwYWNh9CxZfA==: 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2I4NjA5ZmZhNDM3NDAyNWM4ZjFkYTEyN2VkMWM2MjB7C8xa: 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:56.942 19:17:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.942 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.510 nvme0n1 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
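Note the asymmetry across key IDs: keys 0 through 3 each carry a controller key, so the attach passes both --dhchap-key keyN and --dhchap-ctrlr-key ckeyN (bidirectional authentication), while key 4's ckey is empty and its attach omits the controller key, exercising host-only authentication. The trace's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion, written out long-hand:

    # host/auth.sh@58's array expansion, spelled out.
    if [[ -n ${ckeys[keyid]} ]]; then
        ckey=(--dhchap-ctrlr-key "ckey${keyid}")  # controller must also prove its identity
    else
        ckey=()                                   # unidirectional: only the host authenticates
    fi
    rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"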
00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.510 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.834 19:17:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZiODc0YTMwMjc0OGIzMDJmODMwMGM0YmRmMGFlOGI1ZjAxNjM4MmYwNDg2Y2MzZjIwYjQzMTE2ODkwZTM5YzuWhoI=: 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.834 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.835 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.436 nvme0n1 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWUzNmZkZjk1N2JjNDJhZWUwMDk1YTQzZWMwZWEwODhmNTYyZjVjNWU2N2U2NjIyIueGbg==: 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: ]] 00:26:58.437 
19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODVlNDI4YzI5NDVmNDVjNzBhYjJlODc0NWIwMDgwZjA0NjUwZmE5MzhlN2QwNWQ4qtZK0g==: 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 request: 00:26:58.437 { 00:26:58.437 "name": "nvme0", 00:26:58.437 "trtype": "rdma", 00:26:58.437 "traddr": "192.168.100.8", 00:26:58.437 "adrfam": "ipv4", 00:26:58.437 "trsvcid": "4420", 00:26:58.437 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:26:58.437 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:58.437 "prchk_reftag": false, 00:26:58.437 "prchk_guard": false, 00:26:58.437 "hdgst": false, 00:26:58.437 "ddgst": false, 00:26:58.437 "method": "bdev_nvme_attach_controller", 00:26:58.437 "req_id": 1 00:26:58.437 } 00:26:58.437 Got JSON-RPC error response 00:26:58.437 response: 00:26:58.437 { 00:26:58.437 "code": -5, 00:26:58.437 "message": "Input/output error" 00:26:58.437 } 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.437 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 
00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.742 19:17:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.742 request: 00:26:58.742 { 00:26:58.742 "name": "nvme0", 00:26:58.742 "trtype": "rdma", 00:26:58.743 "traddr": "192.168.100.8", 00:26:58.743 "adrfam": "ipv4", 00:26:58.743 "trsvcid": "4420", 00:26:58.743 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:58.743 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:58.743 "prchk_reftag": false, 00:26:58.743 "prchk_guard": false, 00:26:58.743 "hdgst": false, 00:26:58.743 "ddgst": false, 00:26:58.743 "dhchap_key": "key2", 00:26:58.743 "method": "bdev_nvme_attach_controller", 00:26:58.743 "req_id": 1 00:26:58.743 } 00:26:58.743 Got JSON-RPC error response 00:26:58.743 response: 00:26:58.743 { 00:26:58.743 "code": -5, 00:26:58.743 "message": "Input/output error" 00:26:58.743 } 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.743 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.074 request: 00:26:59.074 { 00:26:59.074 "name": "nvme0", 00:26:59.074 "trtype": "rdma", 00:26:59.074 "traddr": "192.168.100.8", 00:26:59.074 "adrfam": "ipv4", 00:26:59.074 "trsvcid": "4420", 00:26:59.074 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:59.074 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:59.074 "prchk_reftag": false, 00:26:59.074 "prchk_guard": false, 00:26:59.074 "hdgst": false, 00:26:59.074 "ddgst": false, 00:26:59.074 "dhchap_key": "key1", 00:26:59.074 "dhchap_ctrlr_key": "ckey2", 00:26:59.074 "method": "bdev_nvme_attach_controller", 00:26:59.074 "req_id": 1 00:26:59.074 } 00:26:59.074 Got JSON-RPC error response 00:26:59.074 response: 00:26:59.074 { 00:26:59.074 "code": -5, 00:26:59.074 "message": "Input/output error" 00:26:59.074 } 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@24 -- # nvmftestfini 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:59.074 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:59.075 rmmod nvme_rdma 00:26:59.075 rmmod nvme_fabrics 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 893449 ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 893449 ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893449' 00:26:59.075 killing process with pid 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 893449 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.075 
19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.075 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:59.364 19:17:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:27:01.899 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:01.899 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:02.157 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:03.094 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:03.094 19:17:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.x8A /tmp/spdk.key-null.20E /tmp/spdk.key-sha256.Pq4 /tmp/spdk.key-sha384.MQu /tmp/spdk.key-sha512.YBX /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:27:03.094 19:17:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:27:06.385 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:06.385 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:06.385 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:06.385 00:27:06.385 real 0m54.295s 00:27:06.385 user 0m50.531s 
00:27:06.385 sys 0m12.630s 00:27:06.385 19:17:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.386 ************************************ 00:27:06.386 END TEST nvmf_auth_host 00:27:06.386 ************************************ 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.386 ************************************ 00:27:06.386 START TEST nvmf_bdevperf 00:27:06.386 ************************************ 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:27:06.386 * Looking for test storage... 00:27:06.386 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 
-- # : 0 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.386 19:17:58 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga 
e810 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:27:11.661 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.661 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:27:11.661 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:27:11.661 19:18:04 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:27:11.662 Found net devices under 0000:af:00.0: mlx_0_0 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:27:11.662 Found net devices under 0000:af:00.1: mlx_0_1 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe 
ib_umad 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:11.662 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:11.921 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_0 00:27:11.922 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:11.922 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:27:11.922 altname enp175s0f0np0 00:27:11.922 altname ens801f0np0 00:27:11.922 inet 192.168.100.8/24 scope global mlx_0_0 00:27:11.922 valid_lft forever preferred_lft forever 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:11.922 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:11.922 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:27:11.922 altname enp175s0f1np1 00:27:11.922 altname ens801f1np1 00:27:11.922 inet 192.168.100.9/24 scope global mlx_0_1 00:27:11.922 valid_lft forever preferred_lft forever 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:11.922 192.168.100.9' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:11.922 192.168.100.9' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:11.922 192.168.100.9' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=907446 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 907446 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 907446 ']' 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.922 19:18:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.922 [2024-07-25 19:18:04.347021] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:11.922 [2024-07-25 19:18:04.347064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.922 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.182 [2024-07-25 19:18:04.415792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:12.182 [2024-07-25 19:18:04.493769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.182 [2024-07-25 19:18:04.493809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.182 [2024-07-25 19:18:04.493816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.182 [2024-07-25 19:18:04.493822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.182 [2024-07-25 19:18:04.493828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
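The trace above is nvmfappstart launching the target app and then waiting for its RPC socket. As a minimal sketch (not the harness's actual nvmfappstart/waitforlisten functions), the pattern reduces to starting nvmf_tgt with the flags shown in the log and polling rpc.py until /var/tmp/spdk.sock answers; the retry budget and poll interval below are illustrative assumptions:

#!/usr/bin/env bash
# Launch-and-wait sketch: nvmf_tgt on cores 1-3 (-m 0xE), flags as traced above.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc.py exits non-zero until the app is listening on /var/tmp/spdk.sock;
    # rpc_get_methods is a cheap built-in query used here just to probe readiness.
    "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null && break
    sleep 0.1
done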
00:27:12.182 [2024-07-25 19:18:04.493944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.182 [2024-07-25 19:18:04.493967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.182 [2024-07-25 19:18:04.493969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.748 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.748 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:12.748 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.748 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.748 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.006 [2024-07-25 19:18:05.253764] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x130e580/0x1312a70) succeed. 00:27:13.006 [2024-07-25 19:18:05.262794] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x130fb20/0x1354110) succeed. 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.006 Malloc0 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:27:13.006 [2024-07-25 19:18:05.399972] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.006 { 00:27:13.006 "params": { 00:27:13.006 "name": "Nvme$subsystem", 00:27:13.006 "trtype": "$TEST_TRANSPORT", 00:27:13.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.006 "adrfam": "ipv4", 00:27:13.006 "trsvcid": "$NVMF_PORT", 00:27:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.006 "hdgst": ${hdgst:-false}, 00:27:13.006 "ddgst": ${ddgst:-false} 00:27:13.006 }, 00:27:13.006 "method": "bdev_nvme_attach_controller" 00:27:13.006 } 00:27:13.006 EOF 00:27:13.006 )") 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:13.006 19:18:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:13.006 "params": { 00:27:13.006 "name": "Nvme1", 00:27:13.006 "trtype": "rdma", 00:27:13.006 "traddr": "192.168.100.8", 00:27:13.006 "adrfam": "ipv4", 00:27:13.006 "trsvcid": "4420", 00:27:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.006 "hdgst": false, 00:27:13.006 "ddgst": false 00:27:13.006 }, 00:27:13.006 "method": "bdev_nvme_attach_controller" 00:27:13.006 }' 00:27:13.006 [2024-07-25 19:18:05.448125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:13.006 [2024-07-25 19:18:05.448171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907588 ] 00:27:13.006 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.264 [2024-07-25 19:18:05.516819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.264 [2024-07-25 19:18:05.589500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.522 Running I/O for 1 seconds... 
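Between target start and the first bdevperf run, the rpc_cmd calls traced above configure the export path. Condensed into a sketch (the method names and every value are verbatim from the trace; only the condensed shape is editorial):

# Target-side RPC sequence, as traced: RDMA transport, a RAM-backed bdev,
# one subsystem with that bdev as a namespace, and an RDMA listener.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420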
00:27:14.456 00:27:14.456 Latency(us) 00:27:14.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.456 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:14.456 Verification LBA range: start 0x0 length 0x4000 00:27:14.456 Nvme1n1 : 1.00 17596.59 68.74 0.00 0.00 7233.54 2592.95 11853.47 00:27:14.456 =================================================================================================================== 00:27:14.456 Total : 17596.59 68.74 0.00 0.00 7233.54 2592.95 11853.47 00:27:14.714 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=907867 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.715 { 00:27:14.715 "params": { 00:27:14.715 "name": "Nvme$subsystem", 00:27:14.715 "trtype": "$TEST_TRANSPORT", 00:27:14.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.715 "adrfam": "ipv4", 00:27:14.715 "trsvcid": "$NVMF_PORT", 00:27:14.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.715 "hdgst": ${hdgst:-false}, 00:27:14.715 "ddgst": ${ddgst:-false} 00:27:14.715 }, 00:27:14.715 "method": "bdev_nvme_attach_controller" 00:27:14.715 } 00:27:14.715 EOF 00:27:14.715 )") 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:14.715 19:18:06 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:14.715 "params": { 00:27:14.715 "name": "Nvme1", 00:27:14.715 "trtype": "rdma", 00:27:14.715 "traddr": "192.168.100.8", 00:27:14.715 "adrfam": "ipv4", 00:27:14.715 "trsvcid": "4420", 00:27:14.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.715 "hdgst": false, 00:27:14.715 "ddgst": false 00:27:14.715 }, 00:27:14.715 "method": "bdev_nvme_attach_controller" 00:27:14.715 }' 00:27:14.715 [2024-07-25 19:18:07.020102] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
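The second bdevperf invocation above reads its controller definition from /dev/fd/63, produced by gen_nvmf_target_json. A sketch of that flow, assuming SPDK's standard JSON-config wrapper around the printf fragment shown in the trace (the params object is verbatim from the log; the outer "subsystems"/"config" nesting is an assumption about what the helper assembles):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Emit the config bdevperf replays at startup to attach controller Nvme1.
gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Flags as traced; -f appears to keep the run going through I/O failures,
# which matters here because the harness kill -9s the target mid-run and the
# ABORTED - SQ DELETION storm further below is the expected fallout.
"$SPDK/build/examples/bdevperf" --json <(gen_config) -q 128 -o 4096 -w verify -t 15 -f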
00:27:14.715 [2024-07-25 19:18:07.020154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907867 ] 00:27:14.715 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.715 [2024-07-25 19:18:07.090372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.715 [2024-07-25 19:18:07.161933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.973 Running I/O for 15 seconds... 00:27:18.258 19:18:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 907446 00:27:18.258 19:18:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:18.828 [2024-07-25 19:18:11.005228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 
19:18:11.005392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187c00 00:27:18.828 [2024-07-25 19:18:11.005593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.828 [2024-07-25 19:18:11.005602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111816 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187c00 00:27:18.829 [2024-07-25 19:18:11.005783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0 00:27:18.829 [2024-07-25 19:18:11.005791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187c00 
00:27:18.829 [2024-07-25 19:18:11.005798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0
00:27:18.829 - 00:27:18.831 [2024-07-25 19:18:11.005806 - 19:18:11.015353] nvme_qpair.c: 243/474: *NOTICE*: 92 further queued READ commands on sqid:1 (nsid:1, lba 111896 through 112624 in steps of 8, len:8, SGL KEYED DATA BLOCK len:0x1000 key:0x187c00, buffer addresses stepping down from 0x2000075b8000 to 0x200007502000) were each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fb2bf000 sqhd:52b0 p:0 m:0 dnr:0; the repetitive per-command NOTICE pairs are elided here.
00:27:18.831 [2024-07-25 19:18:11.016748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:18.831 [2024-07-25 19:18:11.016761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:18.832 [2024-07-25 19:18:11.016768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112632 len:8 PRP1 0x0 PRP2 0x0
00:27:18.832 [2024-07-25 19:18:11.016776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.832 [2024-07-25 19:18:11.016818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:27:18.832 [2024-07-25 19:18:11.016847 - 19:18:11.016918] nvme_qpair.c: 223/474: *NOTICE*: four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:1 through cid:4) likewise completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; per-command NOTICE pairs elided.
00:27:18.832 [2024-07-25 19:18:11.034224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:18.832 [2024-07-25 19:18:11.034271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:18.832 [2024-07-25 19:18:11.034294] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
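Taken together, the condensed block above is the host-side signature of a target going away mid-I/O: queued READs are flushed with ABORTED - SQ DELETION, the disconnected qpair is freed and a controller reset is scheduled, and the reset then fails because the RDMA connection is rejected for as long as the target stays down. When triaging a run like this, counting those signatures gives a quick sense of the churn; a minimal sketch, assuming the run output was captured to a file:

```bash
#!/usr/bin/env bash
# Hypothetical triage helper; "$1" is a captured run log (the path is an
# assumption). The grep patterns are copied verbatim from the trace above.
log=${1:-nvmf_bdevperf.log}

printf 'aborted I/O completions: %s\n' "$(grep -c 'ABORTED - SQ DELETION' "$log")"
printf 'RDMA CM rejections:      %s\n' "$(grep -c 'RDMA_CM_EVENT_REJECTED' "$log")"
printf 'failed resets:           %s\n' "$(grep -c 'Resetting controller failed' "$log")"
printf 'successful resets:       %s\n' "$(grep -c 'Resetting controller successful' "$log")"
```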
00:27:18.832 [2024-07-25 19:18:11.037670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.832 [2024-07-25 19:18:11.040686] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:18.832 [2024-07-25 19:18:11.040704] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:18.832 [2024-07-25 19:18:11.040710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:19.769 [2024-07-25 19:18:12.044786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:19.769 [2024-07-25 19:18:12.044840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:19.769 [2024-07-25 19:18:12.045061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:19.769 [2024-07-25 19:18:12.045071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:19.769 [2024-07-25 19:18:12.045078] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:19.769 [2024-07-25 19:18:12.047825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:19.769 [2024-07-25 19:18:12.051734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:19.769 [2024-07-25 19:18:12.054542] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:19.769 [2024-07-25 19:18:12.054559] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:19.769 [2024-07-25 19:18:12.054565] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:20.706 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 907446 Killed "${NVMF_APP[@]}" "$@" 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=909351 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 909351 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 909351 ']' 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.706 19:18:12 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.706 19:18:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:20.706 [2024-07-25 19:18:13.034814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:20.706 [2024-07-25 19:18:13.034860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.706 [2024-07-25 19:18:13.058339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:20.706 [2024-07-25 19:18:13.058363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:20.706 [2024-07-25 19:18:13.058544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:20.706 [2024-07-25 19:18:13.058553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:20.706 [2024-07-25 19:18:13.058561] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:20.706 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.706 [2024-07-25 19:18:13.061403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:20.706 [2024-07-25 19:18:13.065269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:20.706 [2024-07-25 19:18:13.067825] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:20.706 [2024-07-25 19:18:13.067845] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:20.706 [2024-07-25 19:18:13.067852] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:20.706 [2024-07-25 19:18:13.103205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:20.964 [2024-07-25 19:18:13.179698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.965 [2024-07-25 19:18:13.179735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.965 [2024-07-25 19:18:13.179742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.965 [2024-07-25 19:18:13.179748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.965 [2024-07-25 19:18:13.179752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
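The tgt_init/nvmfappstart step traced above reduces to launching the target binary with an explicit reactor mask and waiting for its RPC socket. A minimal stand-alone sketch of the same step, with the binary path and flags taken from the trace (waitforlisten is the autotest_common.sh helper that, as the trace notes, polls until the process listens on /var/tmp/spdk.sock):

```bash
# Sketch of nvmfappstart -m 0xE as traced above (path and flags from this log).
NVMF_APP=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt)
"${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &   # tracepoint mask 0xFFFF, reactors on cores 1-3
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs
```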
00:27:20.965 [2024-07-25 19:18:13.179812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.965 [2024-07-25 19:18:13.179930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.965 [2024-07-25 19:18:13.179931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.532 19:18:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.532 [2024-07-25 19:18:13.944202] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8db580/0x8dfa70) succeed. 00:27:21.532 [2024-07-25 19:18:13.954465] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8dcb20/0x921110) succeed. 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.792 [2024-07-25 19:18:14.071732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:21.792 [2024-07-25 19:18:14.071759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:21.792 [2024-07-25 19:18:14.071946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:21.792 [2024-07-25 19:18:14.071956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:21.792 [2024-07-25 19:18:14.071964] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:21.792 [2024-07-25 19:18:14.071977] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.792 [2024-07-25 19:18:14.074817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
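The two create_ib_device notices above confirm that the rdma transport bound both mlx5 ports. When that step fails on a rig, the first checks are usually outside SPDK entirely; two stock commands (not part of this test script) that confirm the HCA is visible and its ports are up:

```bash
# Stock RDMA sanity checks, independent of the test scripts:
ibv_devinfo | grep -E 'hca_id|state'   # expect mlx5_0/mlx5_1 with state PORT_ACTIVE
lspci -nn | grep -i '15b3:1017'        # same vendor/device IDs this log matches later
```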
00:27:21.792 Malloc0 00:27:21.792 [2024-07-25 19:18:14.085112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.792 [2024-07-25 19:18:14.087668] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:21.792 [2024-07-25 19:18:14.087689] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.792 [2024-07-25 19:18:14.087696] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:21.792 [2024-07-25 19:18:14.109000] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.792 19:18:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 907867 00:27:22.729 [2024-07-25 19:18:15.091362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:22.729 [2024-07-25 19:18:15.091384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:22.729 [2024-07-25 19:18:15.091566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:22.729 [2024-07-25 19:18:15.091575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:22.729 [2024-07-25 19:18:15.091583] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:22.729 [2024-07-25 19:18:15.091596] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:22.729 [2024-07-25 19:18:15.094431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
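Interleaved with the host-side reset noise, tgt_init issues five RPCs. Collected in one place (each command verbatim from the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py), the target bring-up is:

```bash
# tgt_init RPC sequence as traced above:
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
```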
00:27:22.729 [2024-07-25 19:18:15.104643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:22.729 [2024-07-25 19:18:15.146323] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:30.875 00:27:30.875 Latency(us) 00:27:30.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:30.875 Verification LBA range: start 0x0 length 0x4000 00:27:30.875 Nvme1n1 : 15.00 11614.79 45.37 13180.37 0.00 5141.99 466.59 1057694.05 00:27:30.875 =================================================================================================================== 00:27:30.875 Total : 11614.79 45.37 13180.37 0.00 5141.99 466.59 1057694.05 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:30.875 rmmod nvme_rdma 00:27:30.875 rmmod nvme_fabrics 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 909351 ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 909351 ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:30.875 19:18:22 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 909351' 00:27:30.875 killing process with pid 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 909351 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:30.875 00:27:30.875 real 0m24.594s 00:27:30.875 user 1m4.470s 00:27:30.875 sys 0m5.456s 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:30.875 19:18:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 ************************************ 00:27:30.875 END TEST nvmf_bdevperf 00:27:30.875 ************************************ 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.875 ************************************ 00:27:30.875 START TEST nvmf_target_disconnect 00:27:30.875 ************************************ 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:30.875 * Looking for test storage... 
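Before the target_disconnect setup scrolls past, note that the teardown which just closed nvmf_bdevperf reduces to four steps; a sketch assembled from the bdevperf.sh trace above (nvmftestfini is the nvmf/common.sh helper whose module unloads and killprocess call appear in the trace):

```bash
# bdevperf.sh teardown, as traced above:
sync
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT
nvmftestfini   # modprobe -r nvme-rdma nvme-fabrics, then killprocess "$nvmfpid"
```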
00:27:30.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:30.875 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
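The host identity that later nvme connect calls will use is generated by nvme-cli itself, as the two common.sh assignments in the trace show. One plausible reading of how the pair is produced (the parameter expansion is an assumption; the trace only shows the resulting values):

```bash
# Assumed derivation of the NVME_HOSTNQN/NVME_HOSTID pair seen in the trace:
NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, 80bdebd3-...-0017a4403562 in this run
```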
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3-@6 -- # [the @3 and @4 PATH assignments prepend the same /opt/go/1.21.1/bin, /opt/golangci/1.54.2/bin and /opt/protoc/21.7/bin entries to the PATH shown above once more each; @5 exports PATH and @6 echoes it; the repeated multi-hundred-character PATH values are elided]
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect --
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.876 19:18:23 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:27:37.445 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:27:37.445 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:37.445 19:18:28 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:27:37.445 Found net devices under 0000:af:00.0: mlx_0_0 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:27:37.445 Found net devices under 0000:af:00.1: mlx_0_1 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:37.445 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 
192.168.100.8 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:37.446 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.446 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:27:37.446 altname enp175s0f0np0 00:27:37.446 altname ens801f0np0 00:27:37.446 inet 192.168.100.8/24 scope global mlx_0_0 00:27:37.446 valid_lft forever preferred_lft forever 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:37.446 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:37.446 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:27:37.446 altname enp175s0f1np1 00:27:37.446 altname ens801f1np1 00:27:37.446 inet 192.168.100.9/24 scope global mlx_0_1 00:27:37.446 valid_lft forever preferred_lft forever 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # 
echo mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:37.446 192.168.100.9' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:37.446 192.168.100.9' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:37.446 192.168.100.9' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.446 19:18:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.446 ************************************ 00:27:37.446 START TEST nvmf_target_disconnect_tc1 00:27:37.446 ************************************ 00:27:37.446 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:37.447 19:18:29 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:27:37.447 19:18:29 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:37.447 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.447 [2024-07-25 19:18:29.126366] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:37.447 [2024-07-25 19:18:29.126430] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:37.447 [2024-07-25 19:18:29.126438] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:27:37.705 [2024-07-25 19:18:30.130363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:37.705 [2024-07-25 19:18:30.130430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:27:37.705 [2024-07-25 19:18:30.130457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:27:37.705 [2024-07-25 19:18:30.130524] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:37.705 [2024-07-25 19:18:30.130532] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:37.705 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:27:37.705 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:37.706 Initializing NVMe Controllers 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:37.706 00:27:37.706 real 0m1.138s 00:27:37.706 user 0m0.928s 00:27:37.706 sys 0m0.199s 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.706 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.706 ************************************ 00:27:37.706 END TEST nvmf_target_disconnect_tc1 00:27:37.706 ************************************ 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.965 ************************************ 00:27:37.965 START TEST nvmf_target_disconnect_tc2 00:27:37.965 ************************************ 00:27:37.965 19:18:30 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=914241 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 914241 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 914241 ']' 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.965 19:18:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:37.965 [2024-07-25 19:18:30.272334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:37.965 [2024-07-25 19:18:30.272373] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.965 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.965 [2024-07-25 19:18:30.342124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.965 [2024-07-25 19:18:30.416283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.965 [2024-07-25 19:18:30.416322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.965 [2024-07-25 19:18:30.416329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.965 [2024-07-25 19:18:30.416338] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:37.965 [2024-07-25 19:18:30.416343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.965 [2024-07-25 19:18:30.416454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:37.965 [2024-07-25 19:18:30.416488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:37.965 [2024-07-25 19:18:30.416598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:37.965 [2024-07-25 19:18:30.416599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 Malloc0 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 [2024-07-25 19:18:31.197127] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb64f50/0xb70ad0) succeed. 00:27:38.901 [2024-07-25 19:18:31.206775] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb66590/0xbf0b40) succeed. 
00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.901 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.902 [2024-07-25 19:18:31.351616] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=914401 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:38.902 19:18:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:39.161 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.060 19:18:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 914241 00:27:41.060 19:18:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Read completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 Write completed with error (sct=0, sc=8) 00:27:42.434 starting I/O failed 00:27:42.434 [2024-07-25 19:18:34.541713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:42.435 [2024-07-25 19:18:34.543221] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:42.435 [2024-07-25 19:18:34.543267] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:42.435 [2024-07-25 
19:18:34.543288] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:43.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 914241 Killed "${NVMF_APP[@]}" "$@" 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=915095 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 915095 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 915095 ']' 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.003 19:18:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.003 [2024-07-25 19:18:35.425871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:43.003 [2024-07-25 19:18:35.425917] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.003 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.262 [2024-07-25 19:18:35.496167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.262 [2024-07-25 19:18:35.547347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.262 qpair failed and we were unable to recover it. 
00:27:43.262 [2024-07-25 19:18:35.548904] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:43.262 [2024-07-25 19:18:35.548921] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:43.262 [2024-07-25 19:18:35.548928] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:43.262 [2024-07-25 19:18:35.569220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.262 [2024-07-25 19:18:35.569249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.262 [2024-07-25 19:18:35.569260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.262 [2024-07-25 19:18:35.569266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.262 [2024-07-25 19:18:35.569286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.262 [2024-07-25 19:18:35.569395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:43.262 [2024-07-25 19:18:35.569501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:43.262 [2024-07-25 19:18:35.569606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.262 [2024-07-25 19:18:35.569607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.831 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.090 Malloc0 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.090 [2024-07-25 19:18:36.345752] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18bef50/0x18caad0) succeed. 00:27:44.090 [2024-07-25 19:18:36.356584] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18c0590/0x194ab40) succeed. 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.090 [2024-07-25 19:18:36.504540] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.090 19:18:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 914401 00:27:44.090 [2024-07-25 19:18:36.552853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.090 qpair failed and we were unable to recover it. 
00:27:44.350 [2024-07-25 19:18:36.564787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.564844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.564862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.564871] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.564877] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.574150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 00:27:44.350 [2024-07-25 19:18:36.584512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.584552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.584567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.584575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.584581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.594168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 00:27:44.350 [2024-07-25 19:18:36.604507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.604543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.604558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.604565] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.604575] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.614298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 
00:27:44.350 [2024-07-25 19:18:36.624701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.624744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.624759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.624766] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.624773] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.634383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 00:27:44.350 [2024-07-25 19:18:36.644666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.644706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.644721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.644728] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.644734] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.654379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 00:27:44.350 [2024-07-25 19:18:36.664794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:44.350 [2024-07-25 19:18:36.664830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:44.350 [2024-07-25 19:18:36.664844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:44.350 [2024-07-25 19:18:36.664851] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:44.350 [2024-07-25 19:18:36.664858] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.350 [2024-07-25 19:18:36.674512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.350 qpair failed and we were unable to recover it. 
00:27:44.350 [2024-07-25 19:18:36.684807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.350 [2024-07-25 19:18:36.684848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.350 [2024-07-25 19:18:36.684862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.350 [2024-07-25 19:18:36.684869] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.350 [2024-07-25 19:18:36.684875] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.694587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.704859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.704907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.704922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.704929] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.704935] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.714712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.724937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.724976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.724991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.724998] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.725004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.734617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.744958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.744995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.745009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.745016] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.745022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.754722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.765075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.765112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.765127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.765134] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.765140] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.774826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.785111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.785150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.785168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.785175] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.785181] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.794774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.351 [2024-07-25 19:18:36.805109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.351 [2024-07-25 19:18:36.805144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.351 [2024-07-25 19:18:36.805158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.351 [2024-07-25 19:18:36.805165] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.351 [2024-07-25 19:18:36.805172] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.351 [2024-07-25 19:18:36.814934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.351 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.825186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.825228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.611 [2024-07-25 19:18:36.825243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.611 [2024-07-25 19:18:36.825251] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.611 [2024-07-25 19:18:36.825258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.611 [2024-07-25 19:18:36.835099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.611 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.845227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.845259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.611 [2024-07-25 19:18:36.845273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.611 [2024-07-25 19:18:36.845280] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.611 [2024-07-25 19:18:36.845286] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.611 [2024-07-25 19:18:36.855088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.611 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.865243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.865283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.611 [2024-07-25 19:18:36.865297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.611 [2024-07-25 19:18:36.865304] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.611 [2024-07-25 19:18:36.865310] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.611 [2024-07-25 19:18:36.875122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.611 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.885332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.885367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.611 [2024-07-25 19:18:36.885381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.611 [2024-07-25 19:18:36.885388] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.611 [2024-07-25 19:18:36.885394] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.611 [2024-07-25 19:18:36.895395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.611 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.905527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.905563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.611 [2024-07-25 19:18:36.905577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.611 [2024-07-25 19:18:36.905583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.611 [2024-07-25 19:18:36.905590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.611 [2024-07-25 19:18:36.915358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.611 qpair failed and we were unable to recover it.
00:27:44.611 [2024-07-25 19:18:36.925482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.611 [2024-07-25 19:18:36.925514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:36.925528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:36.925535] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:36.925541] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:36.935379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:36.945539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:36.945577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:36.945591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:36.945597] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:36.945603] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:36.955432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:36.965594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:36.965635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:36.965649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:36.965656] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:36.965662] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:36.975335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:36.985724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:36.985766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:36.985779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:36.985786] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:36.985792] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:36.995449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:37.005759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:37.005798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:37.005812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:37.005819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:37.005825] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:37.015569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:37.025796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:37.025833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:37.025847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:37.025853] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:37.025860] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:37.035616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:37.045797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:37.045837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:37.045851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:37.045858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:37.045867] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:37.055749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.612 [2024-07-25 19:18:37.065854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.612 [2024-07-25 19:18:37.065896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.612 [2024-07-25 19:18:37.065915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.612 [2024-07-25 19:18:37.065922] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.612 [2024-07-25 19:18:37.065928] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.612 [2024-07-25 19:18:37.075690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.612 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.085891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.085931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.085946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.085953] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.085960] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.095747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.106102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.106139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.106153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.106160] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.106166] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.115981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.126170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.126212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.126227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.126233] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.126239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.135928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.146151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.146185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.146200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.146207] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.146212] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.155930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.166209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.166249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.166263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.166271] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.166277] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.176059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.186391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.872 [2024-07-25 19:18:37.186430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.872 [2024-07-25 19:18:37.186444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.872 [2024-07-25 19:18:37.186451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.872 [2024-07-25 19:18:37.186457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.872 [2024-07-25 19:18:37.196171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.872 qpair failed and we were unable to recover it.
00:27:44.872 [2024-07-25 19:18:37.206387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.206425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.206439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.206446] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.206452] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.216136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.226347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.226383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.226400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.226407] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.226413] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.236198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.246514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.246555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.246569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.246576] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.246582] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.256213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.266584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.266626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.266641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.266648] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.266653] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.276417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.286725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.286767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.286781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.286788] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.286795] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.296320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.306820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.306854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.306868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.306876] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.306882] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.316469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:44.873 [2024-07-25 19:18:37.326785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:44.873 [2024-07-25 19:18:37.326823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:44.873 [2024-07-25 19:18:37.326838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:44.873 [2024-07-25 19:18:37.326844] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:44.873 [2024-07-25 19:18:37.326851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:44.873 [2024-07-25 19:18:37.336535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.873 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.346759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.346799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.346814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.133 [2024-07-25 19:18:37.346821] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.133 [2024-07-25 19:18:37.346827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.133 [2024-07-25 19:18:37.356340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.366824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.366862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.366876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.133 [2024-07-25 19:18:37.366883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.133 [2024-07-25 19:18:37.366889] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.133 [2024-07-25 19:18:37.376628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.386932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.386967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.386981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.133 [2024-07-25 19:18:37.386988] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.133 [2024-07-25 19:18:37.386994] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.133 [2024-07-25 19:18:37.396432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.406953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.406999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.407013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.133 [2024-07-25 19:18:37.407021] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.133 [2024-07-25 19:18:37.407027] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.133 [2024-07-25 19:18:37.416650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.426976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.427014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.427028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.133 [2024-07-25 19:18:37.427035] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.133 [2024-07-25 19:18:37.427041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.133 [2024-07-25 19:18:37.436773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.133 qpair failed and we were unable to recover it.
00:27:45.133 [2024-07-25 19:18:37.447048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.133 [2024-07-25 19:18:37.447093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.133 [2024-07-25 19:18:37.447107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.447113] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.447119] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.456908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.467161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.467199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.467214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.467221] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.467227] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.476855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.487320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.487357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.487371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.487378] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.487387] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.496921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.507418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.507456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.507470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.507477] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.507483] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.517153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.527418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.527454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.527468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.527475] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.527481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.537105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.547500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.547534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.547548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.547554] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.547561] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.557228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.567586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.567622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.567636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.567643] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.567649] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.577273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.134 [2024-07-25 19:18:37.587576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.134 [2024-07-25 19:18:37.587615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.134 [2024-07-25 19:18:37.587630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.134 [2024-07-25 19:18:37.587637] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.134 [2024-07-25 19:18:37.587643] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.134 [2024-07-25 19:18:37.597368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.134 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.607564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.607609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.607624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.607632] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.607638] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.617411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.627670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.627707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.627721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.627728] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.627734] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.637462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.647696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.647735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.647749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.647756] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.647762] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.657410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.667783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.667822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.667841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.667848] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.667854] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.677443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.687794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.687837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.687851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.687858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.687863] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.697487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.707820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.707862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.707876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.707882] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.707888] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.717716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.727984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.394 [2024-07-25 19:18:37.728017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.394 [2024-07-25 19:18:37.728031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.394 [2024-07-25 19:18:37.728038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.394 [2024-07-25 19:18:37.728044] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.394 [2024-07-25 19:18:37.737725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.394 qpair failed and we were unable to recover it.
00:27:45.394 [2024-07-25 19:18:37.748010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.748048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.748062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.748069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.748075] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.757813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.395 [2024-07-25 19:18:37.768059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.768099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.768113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.768121] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.768127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.777906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.395 [2024-07-25 19:18:37.788093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.788130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.788145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.788151] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.788158] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.797957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.395 [2024-07-25 19:18:37.808190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.808223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.808237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.808244] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.808250] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.817974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.395 [2024-07-25 19:18:37.828250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.828289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.828303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.828311] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.828317] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.838113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.395 [2024-07-25 19:18:37.848403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.395 [2024-07-25 19:18:37.848445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.395 [2024-07-25 19:18:37.848460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.395 [2024-07-25 19:18:37.848467] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.395 [2024-07-25 19:18:37.848473] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.395 [2024-07-25 19:18:37.857904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.395 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.868259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.868300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.868315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.868322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.868329] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.878071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.888368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.888406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.888421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.888428] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.888434] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.898196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.908391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.908431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.908445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.908451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.908457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.918171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.928491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.928530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.928544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.928551] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.928564] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.938164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.948568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.948608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.948623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.948629] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.948636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.958277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.968612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.968648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.968662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.968669] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.968675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.978430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:37.988743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:37.988781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:37.988795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:37.988802] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:37.988808] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:37.998465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:38.008774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:38.008811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:38.008826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:38.008832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:38.008838] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:38.018498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:38.028781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:38.028821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:38.028835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:38.028842] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:38.028848] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:38.038577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:38.048922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:38.048961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:38.048974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.655 [2024-07-25 19:18:38.048981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.655 [2024-07-25 19:18:38.048987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.655 [2024-07-25 19:18:38.058566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.655 qpair failed and we were unable to recover it.
00:27:45.655 [2024-07-25 19:18:38.068988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.655 [2024-07-25 19:18:38.069026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.655 [2024-07-25 19:18:38.069040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.656 [2024-07-25 19:18:38.069047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.656 [2024-07-25 19:18:38.069053] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.656 [2024-07-25 19:18:38.078675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.656 qpair failed and we were unable to recover it.
00:27:45.656 [2024-07-25 19:18:38.088959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.656 [2024-07-25 19:18:38.089001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.656 [2024-07-25 19:18:38.089015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.656 [2024-07-25 19:18:38.089022] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.656 [2024-07-25 19:18:38.089029] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.656 [2024-07-25 19:18:38.098745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.656 qpair failed and we were unable to recover it.
00:27:45.656 [2024-07-25 19:18:38.109022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.656 [2024-07-25 19:18:38.109062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.656 [2024-07-25 19:18:38.109079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.656 [2024-07-25 19:18:38.109086] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.656 [2024-07-25 19:18:38.109092] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.656 [2024-07-25 19:18:38.118688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.656 qpair failed and we were unable to recover it.
00:27:45.915 [2024-07-25 19:18:38.129046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.915 [2024-07-25 19:18:38.129088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.915 [2024-07-25 19:18:38.129103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.915 [2024-07-25 19:18:38.129110] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.915 [2024-07-25 19:18:38.129115] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.915 [2024-07-25 19:18:38.138829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.915 qpair failed and we were unable to recover it.
00:27:45.915 [2024-07-25 19:18:38.149014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.915 [2024-07-25 19:18:38.149052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.915 [2024-07-25 19:18:38.149065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.915 [2024-07-25 19:18:38.149072] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.915 [2024-07-25 19:18:38.149078] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.915 [2024-07-25 19:18:38.158965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.915 qpair failed and we were unable to recover it.
00:27:45.915 [2024-07-25 19:18:38.169102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.915 [2024-07-25 19:18:38.169146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.915 [2024-07-25 19:18:38.169160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.915 [2024-07-25 19:18:38.169167] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.915 [2024-07-25 19:18:38.169173] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.915 [2024-07-25 19:18:38.179011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.915 qpair failed and we were unable to recover it.
00:27:45.915 [2024-07-25 19:18:38.189276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.915 [2024-07-25 19:18:38.189314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.915 [2024-07-25 19:18:38.189327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.915 [2024-07-25 19:18:38.189334] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.915 [2024-07-25 19:18:38.189340] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.915 [2024-07-25 19:18:38.199040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.915 qpair failed and we were unable to recover it.
00:27:45.915 [2024-07-25 19:18:38.209259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.209293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.209307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.209314] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.209320] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.219007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.229304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.229343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.229357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.229363] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.229369] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.239121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.249430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.249465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.249480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.249486] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.249492] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.259104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.269357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.269398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.269412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.269419] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.269425] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.279203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.289435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.289477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.289492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.289498] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.289505] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.299252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.309592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.309632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.309647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.309654] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.309660] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.319306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.329694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.329729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.329743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.329750] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.329756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.339380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.349699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.349735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.349748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.349755] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.349761] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.359466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:45.916 [2024-07-25 19:18:38.369786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:45.916 [2024-07-25 19:18:38.369828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:45.916 [2024-07-25 19:18:38.369842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:45.916 [2024-07-25 19:18:38.369849] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:45.916 [2024-07-25 19:18:38.369858] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:45.916 [2024-07-25 19:18:38.379481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:45.916 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.389685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.389724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.389740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.389747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.389753] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.399529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.409843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.409880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.409894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.409911] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.409918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.419837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.429892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.429935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.429949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.429956] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.429963] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.439692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.449977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.450014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.450028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.450035] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.450041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.459743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.470082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.470122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.470136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.470143] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.470149] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.479784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.490098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.490143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.490157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.490164] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.490171] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.499800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.510134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.510175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.510189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.510196] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.510202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.519945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.530152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.530188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.530202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.530209] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.530216] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.540016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.550178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.550217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.550235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.550242] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.550248] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.560003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.570363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.570403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.570418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.570424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.570431] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.579971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.590369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.590403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.176 [2024-07-25 19:18:38.590417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.176 [2024-07-25 19:18:38.590424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.176 [2024-07-25 19:18:38.590430] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.176 [2024-07-25 19:18:38.600127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.176 qpair failed and we were unable to recover it.
00:27:46.176 [2024-07-25 19:18:38.610457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.176 [2024-07-25 19:18:38.610498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.177 [2024-07-25 19:18:38.610513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.177 [2024-07-25 19:18:38.610520] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.177 [2024-07-25 19:18:38.610526] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.177 [2024-07-25 19:18:38.620175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.177 qpair failed and we were unable to recover it.
00:27:46.177 [2024-07-25 19:18:38.630434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.177 [2024-07-25 19:18:38.630476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.177 [2024-07-25 19:18:38.630490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.177 [2024-07-25 19:18:38.630497] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.177 [2024-07-25 19:18:38.630503] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.177 [2024-07-25 19:18:38.640245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.177 qpair failed and we were unable to recover it.
00:27:46.436 [2024-07-25 19:18:38.650474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.436 [2024-07-25 19:18:38.650512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.436 [2024-07-25 19:18:38.650526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.436 [2024-07-25 19:18:38.650533] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.436 [2024-07-25 19:18:38.650539] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.436 [2024-07-25 19:18:38.660319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.436 qpair failed and we were unable to recover it.
00:27:46.436 [2024-07-25 19:18:38.670716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.670754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.670768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.670775] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.670781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.680494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.690600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.690633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.690647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.690653] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.690660] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.700332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.710787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.710825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.710839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.710846] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.710852] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.720534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.730799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.730841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.730855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.730862] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.730868] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.740668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.750880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.750922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.750936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.750943] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.750949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.760573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.771004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.771042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.771057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.771064] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.771070] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.780709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.791058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.791099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.791114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.791120] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.791127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.800793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.810969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.811011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.811025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.811032] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.811041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.820709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.831020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.831055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.831068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.831075] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.831081] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.840802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.851082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.851119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.851133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.851140] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.851146] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.860952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.871416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.871453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.871467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.871474] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.871480] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.881015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.437 [2024-07-25 19:18:38.891395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.437 [2024-07-25 19:18:38.891435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.437 [2024-07-25 19:18:38.891449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.437 [2024-07-25 19:18:38.891456] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.437 [2024-07-25 19:18:38.891462] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.437 [2024-07-25 19:18:38.900968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.437 qpair failed and we were unable to recover it.
00:27:46.697 [2024-07-25 19:18:38.911454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.697 [2024-07-25 19:18:38.911491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.697 [2024-07-25 19:18:38.911505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.697 [2024-07-25 19:18:38.911512] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.697 [2024-07-25 19:18:38.911518] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.697 [2024-07-25 19:18:38.921234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.697 qpair failed and we were unable to recover it.
00:27:46.697 [2024-07-25 19:18:38.931449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:38.931487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:38.931501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:38.931508] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:38.931515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:38.941091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:38.951525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:38.951563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:38.951577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:38.951584] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:38.951590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:38.961216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:38.971577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:38.971621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:38.971635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:38.971642] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:38.971648] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:38.981421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:38.991664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:38.991705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:38.991722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:38.991729] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:38.991735] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.001430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.011677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.011716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.011731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.011738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.011744] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.021283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.031782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.031820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.031834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.031841] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.031847] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.041499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.051862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.051904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.051918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.051925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.051932] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.061418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.071777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.071809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.071823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.071830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.071836] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.081393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.092045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.092084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.092099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.092106] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.092112] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.101621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.112009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.112048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.112062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.112069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.112075] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.121763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.131996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.132037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.132051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.132058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.132064] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.141860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.698 [2024-07-25 19:18:39.152097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.698 [2024-07-25 19:18:39.152131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.698 [2024-07-25 19:18:39.152145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.698 [2024-07-25 19:18:39.152152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.698 [2024-07-25 19:18:39.152158] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.698 [2024-07-25 19:18:39.161958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.698 qpair failed and we were unable to recover it.
00:27:46.958 [2024-07-25 19:18:39.172128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.958 [2024-07-25 19:18:39.172178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.958 [2024-07-25 19:18:39.172193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.958 [2024-07-25 19:18:39.172199] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.958 [2024-07-25 19:18:39.172205] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.958 [2024-07-25 19:18:39.182016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.958 qpair failed and we were unable to recover it.
00:27:46.958 [2024-07-25 19:18:39.192388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.958 [2024-07-25 19:18:39.192428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.958 [2024-07-25 19:18:39.192441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.958 [2024-07-25 19:18:39.192448] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.958 [2024-07-25 19:18:39.192454] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.958 [2024-07-25 19:18:39.202130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.958 qpair failed and we were unable to recover it.
00:27:46.958 [2024-07-25 19:18:39.212330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.958 [2024-07-25 19:18:39.212369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.958 [2024-07-25 19:18:39.212384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.958 [2024-07-25 19:18:39.212391] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.958 [2024-07-25 19:18:39.212397] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.958 [2024-07-25 19:18:39.222299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.958 qpair failed and we were unable to recover it.
00:27:46.958 [2024-07-25 19:18:39.232397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.958 [2024-07-25 19:18:39.232434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.958 [2024-07-25 19:18:39.232447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.958 [2024-07-25 19:18:39.232454] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.958 [2024-07-25 19:18:39.232460] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.958 [2024-07-25 19:18:39.242066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.958 qpair failed and we were unable to recover it.
00:27:46.958 [2024-07-25 19:18:39.252471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.958 [2024-07-25 19:18:39.252506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.958 [2024-07-25 19:18:39.252520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.958 [2024-07-25 19:18:39.252527] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.958 [2024-07-25 19:18:39.252537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.958 [2024-07-25 19:18:39.262234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.272437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.272476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.272490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.272498] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.272504] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.282361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.292646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.292683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.292697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.292705] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.292711] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.302318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.312564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.312605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.312620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.312626] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.312633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.322416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.332734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.332772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.332786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.332793] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.332799] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.342450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.352687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.352724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.352738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.352745] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.352751] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.362412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.372748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.372795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.372809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.372816] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.372822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.382526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.392753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.392788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.392801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.392808] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.392814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.402585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:46.959 [2024-07-25 19:18:39.412798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:46.959 [2024-07-25 19:18:39.412831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:46.959 [2024-07-25 19:18:39.412845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:46.959 [2024-07-25 19:18:39.412851] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:46.959 [2024-07-25 19:18:39.412857] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:46.959 [2024-07-25 19:18:39.422691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:46.959 qpair failed and we were unable to recover it.
00:27:47.218 [2024-07-25 19:18:39.432878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.218 [2024-07-25 19:18:39.432921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.218 [2024-07-25 19:18:39.432939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.218 [2024-07-25 19:18:39.432946] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.218 [2024-07-25 19:18:39.432952] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.218 [2024-07-25 19:18:39.442737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.218 qpair failed and we were unable to recover it.
00:27:47.218 [2024-07-25 19:18:39.453042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.218 [2024-07-25 19:18:39.453079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.218 [2024-07-25 19:18:39.453095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.218 [2024-07-25 19:18:39.453102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.453108] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.462686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.473094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.473131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.473144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.473152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.473157] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.482801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.493134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.493173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.493187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.493194] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.493200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.502880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.513201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.513239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.513253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.513259] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.513265] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.523014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.533197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.533237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.533251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.533258] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.533264] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.542967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.553320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.553361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.553375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.553382] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.553388] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.563015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.573385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.573424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.573438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.573446] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.573452] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.582932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.593388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.593426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.593440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.593447] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.593453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.603029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.613497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.613543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.613557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.613564] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.613570] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.623338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.633542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.633587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.633601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.633608] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.633613] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.643296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.653609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.653645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.653659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.653666] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.653672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.663484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.219 [2024-07-25 19:18:39.673729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.219 [2024-07-25 19:18:39.673767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.219 [2024-07-25 19:18:39.673780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.219 [2024-07-25 19:18:39.673787] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.219 [2024-07-25 19:18:39.673793] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.219 [2024-07-25 19:18:39.683461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.219 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.693683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.693727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.693741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.693748] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.693758] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.703522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.713865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.713908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.713922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.713929] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.713935] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.723619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.733981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.734018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.734032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.734039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.734045] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.743754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.753873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.753916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.753931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.753938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.753944] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.763858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.773972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.774015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.774029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.774036] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.774042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.783788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.794012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.794053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.794067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.794074] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.794080] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.803765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.814017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.814058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.814072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.814079] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.814085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.824122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.479 qpair failed and we were unable to recover it.
00:27:47.479 [2024-07-25 19:18:39.834039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.479 [2024-07-25 19:18:39.834077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.479 [2024-07-25 19:18:39.834091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.479 [2024-07-25 19:18:39.834098] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.479 [2024-07-25 19:18:39.834104] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.479 [2024-07-25 19:18:39.843741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.480 [2024-07-25 19:18:39.854393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.480 [2024-07-25 19:18:39.854439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.480 [2024-07-25 19:18:39.854453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.480 [2024-07-25 19:18:39.854460] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.480 [2024-07-25 19:18:39.854466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.480 [2024-07-25 19:18:39.863859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.480 [2024-07-25 19:18:39.874320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.480 [2024-07-25 19:18:39.874361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.480 [2024-07-25 19:18:39.874378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.480 [2024-07-25 19:18:39.874385] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.480 [2024-07-25 19:18:39.874391] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.480 [2024-07-25 19:18:39.884093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.480 [2024-07-25 19:18:39.894385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.480 [2024-07-25 19:18:39.894418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.480 [2024-07-25 19:18:39.894432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.480 [2024-07-25 19:18:39.894439] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.480 [2024-07-25 19:18:39.894445] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.480 [2024-07-25 19:18:39.904115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.480 [2024-07-25 19:18:39.914348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.480 [2024-07-25 19:18:39.914387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.480 [2024-07-25 19:18:39.914402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.480 [2024-07-25 19:18:39.914408] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.480 [2024-07-25 19:18:39.914415] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.480 [2024-07-25 19:18:39.924097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.480 [2024-07-25 19:18:39.934432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.480 [2024-07-25 19:18:39.934476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.480 [2024-07-25 19:18:39.934490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.480 [2024-07-25 19:18:39.934496] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.480 [2024-07-25 19:18:39.934503] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.480 [2024-07-25 19:18:39.944169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.480 qpair failed and we were unable to recover it.
00:27:47.739 [2024-07-25 19:18:39.954495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.739 [2024-07-25 19:18:39.954536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.739 [2024-07-25 19:18:39.954550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.739 [2024-07-25 19:18:39.954558] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.739 [2024-07-25 19:18:39.954564] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.739 [2024-07-25 19:18:39.964210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.739 qpair failed and we were unable to recover it.
00:27:47.739 [2024-07-25 19:18:39.974569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.739 [2024-07-25 19:18:39.974606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.739 [2024-07-25 19:18:39.974620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.739 [2024-07-25 19:18:39.974627] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.739 [2024-07-25 19:18:39.974633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.739 [2024-07-25 19:18:39.984325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.739 qpair failed and we were unable to recover it.
00:27:47.739 [2024-07-25 19:18:39.994611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.739 [2024-07-25 19:18:39.994650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.739 [2024-07-25 19:18:39.994665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.739 [2024-07-25 19:18:39.994671] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.739 [2024-07-25 19:18:39.994677] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.739 [2024-07-25 19:18:40.004513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.014657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.740 [2024-07-25 19:18:40.014702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.740 [2024-07-25 19:18:40.014717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.740 [2024-07-25 19:18:40.014725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.740 [2024-07-25 19:18:40.014731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.740 [2024-07-25 19:18:40.024534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.034681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.740 [2024-07-25 19:18:40.034725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.740 [2024-07-25 19:18:40.034741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.740 [2024-07-25 19:18:40.034748] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.740 [2024-07-25 19:18:40.034755] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.740 [2024-07-25 19:18:40.044521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.054743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.740 [2024-07-25 19:18:40.054793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.740 [2024-07-25 19:18:40.054809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.740 [2024-07-25 19:18:40.054816] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.740 [2024-07-25 19:18:40.054823] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.740 [2024-07-25 19:18:40.064545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.074932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.740 [2024-07-25 19:18:40.074977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.740 [2024-07-25 19:18:40.074992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.740 [2024-07-25 19:18:40.074999] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.740 [2024-07-25 19:18:40.075005] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.740 [2024-07-25 19:18:40.084642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.094888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:47.740 [2024-07-25 19:18:40.094939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:47.740 [2024-07-25 19:18:40.094954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:47.740 [2024-07-25 19:18:40.094961] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:47.740 [2024-07-25 19:18:40.094968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:47.740 [2024-07-25 19:18:40.104606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:47.740 qpair failed and we were unable to recover it.
00:27:47.740 [2024-07-25 19:18:40.114862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.740 [2024-07-25 19:18:40.114909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.740 [2024-07-25 19:18:40.114923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.740 [2024-07-25 19:18:40.114929] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.740 [2024-07-25 19:18:40.114936] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.740 [2024-07-25 19:18:40.124721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.740 qpair failed and we were unable to recover it. 00:27:47.740 [2024-07-25 19:18:40.134877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.740 [2024-07-25 19:18:40.134917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.740 [2024-07-25 19:18:40.134931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.740 [2024-07-25 19:18:40.134938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.740 [2024-07-25 19:18:40.134948] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.740 [2024-07-25 19:18:40.144686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.740 qpair failed and we were unable to recover it. 00:27:47.740 [2024-07-25 19:18:40.155078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.740 [2024-07-25 19:18:40.155118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.740 [2024-07-25 19:18:40.155132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.740 [2024-07-25 19:18:40.155139] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.740 [2024-07-25 19:18:40.155145] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.740 [2024-07-25 19:18:40.164843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.740 qpair failed and we were unable to recover it. 
00:27:47.740 [2024-07-25 19:18:40.175037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.740 [2024-07-25 19:18:40.175076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.740 [2024-07-25 19:18:40.175090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.740 [2024-07-25 19:18:40.175097] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.740 [2024-07-25 19:18:40.175103] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.740 [2024-07-25 19:18:40.184873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.740 qpair failed and we were unable to recover it. 00:27:47.740 [2024-07-25 19:18:40.195169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:47.740 [2024-07-25 19:18:40.195212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:47.740 [2024-07-25 19:18:40.195226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:47.740 [2024-07-25 19:18:40.195233] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:47.740 [2024-07-25 19:18:40.195239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.740 [2024-07-25 19:18:40.204994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.740 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.215254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.215289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.215304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.215311] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.215317] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.225118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-25 19:18:40.235355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.235393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.235408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.235415] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.235421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.245107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.255309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.255351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.255365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.255372] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.255379] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.265145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.275382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.275420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.275435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.275442] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.275448] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.285195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-25 19:18:40.295456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.295496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.295510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.295517] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.295523] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.305248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.315566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.315605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.315623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.315630] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.315636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.325318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.335550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.335586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.335600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.335607] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.335614] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.345232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-25 19:18:40.355520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.355555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.355569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.355576] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.355582] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.365488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.375757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.375793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.375807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.375814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.375820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.385389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 00:27:48.000 [2024-07-25 19:18:40.395829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.000 [2024-07-25 19:18:40.395868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.000 [2024-07-25 19:18:40.395882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.000 [2024-07-25 19:18:40.395889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.000 [2024-07-25 19:18:40.395896] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.000 [2024-07-25 19:18:40.405509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.000 qpair failed and we were unable to recover it. 
00:27:48.000 [2024-07-25 19:18:40.415705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.001 [2024-07-25 19:18:40.415745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.001 [2024-07-25 19:18:40.415759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.001 [2024-07-25 19:18:40.415766] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.001 [2024-07-25 19:18:40.415772] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.001 [2024-07-25 19:18:40.425611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-25 19:18:40.435799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.001 [2024-07-25 19:18:40.435845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.001 [2024-07-25 19:18:40.435859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.001 [2024-07-25 19:18:40.435866] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.001 [2024-07-25 19:18:40.435872] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.001 [2024-07-25 19:18:40.445665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.001 qpair failed and we were unable to recover it. 00:27:48.001 [2024-07-25 19:18:40.455832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.001 [2024-07-25 19:18:40.455866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.001 [2024-07-25 19:18:40.455879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.001 [2024-07-25 19:18:40.455886] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.001 [2024-07-25 19:18:40.455892] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.001 [2024-07-25 19:18:40.465730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.001 qpair failed and we were unable to recover it. 
00:27:48.261 [2024-07-25 19:18:40.475996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.476034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.476048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.476055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.476061] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.485753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.496055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.496095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.496110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.496117] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.496123] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.505902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.516138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.516173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.516187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.516194] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.516200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.525976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 
00:27:48.261 [2024-07-25 19:18:40.536121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.536160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.536174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.536181] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.536187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.545813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.556260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.556301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.556315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.556322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.556328] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.566141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.576137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.576179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.576193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.576204] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.576210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.586042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 
00:27:48.261 [2024-07-25 19:18:40.596382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.596422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.596436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.596443] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.596449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.606054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.616449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.616483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.261 [2024-07-25 19:18:40.616497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.261 [2024-07-25 19:18:40.616504] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.261 [2024-07-25 19:18:40.616510] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.261 [2024-07-25 19:18:40.626142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.261 qpair failed and we were unable to recover it. 00:27:48.261 [2024-07-25 19:18:40.636482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.261 [2024-07-25 19:18:40.636522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.262 [2024-07-25 19:18:40.636536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.262 [2024-07-25 19:18:40.636543] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.262 [2024-07-25 19:18:40.636549] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.262 [2024-07-25 19:18:40.646217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.262 qpair failed and we were unable to recover it. 
00:27:48.262 [2024-07-25 19:18:40.656478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.262 [2024-07-25 19:18:40.656519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.262 [2024-07-25 19:18:40.656534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.262 [2024-07-25 19:18:40.656541] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.262 [2024-07-25 19:18:40.656547] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.262 [2024-07-25 19:18:40.666303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.262 qpair failed and we were unable to recover it. 00:27:48.262 [2024-07-25 19:18:40.676548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.262 [2024-07-25 19:18:40.676584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.262 [2024-07-25 19:18:40.676598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.262 [2024-07-25 19:18:40.676605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.262 [2024-07-25 19:18:40.676611] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.262 [2024-07-25 19:18:40.686394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.262 qpair failed and we were unable to recover it. 00:27:48.262 [2024-07-25 19:18:40.696654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.262 [2024-07-25 19:18:40.696694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.262 [2024-07-25 19:18:40.696708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.262 [2024-07-25 19:18:40.696715] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.262 [2024-07-25 19:18:40.696721] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.262 [2024-07-25 19:18:40.706259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.262 qpair failed and we were unable to recover it. 
00:27:48.262 [2024-07-25 19:18:40.716690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.262 [2024-07-25 19:18:40.716730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.262 [2024-07-25 19:18:40.716745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.262 [2024-07-25 19:18:40.716751] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.262 [2024-07-25 19:18:40.716758] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.262 [2024-07-25 19:18:40.726516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.262 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.736772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.736817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.736832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.736839] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.736845] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.746535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.756971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.757008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.757026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.757033] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.757039] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.766587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 
00:27:48.521 [2024-07-25 19:18:40.776809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.776843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.776857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.776864] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.776870] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.786641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.796935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.796974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.796987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.796994] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.797000] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.806741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.817086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.817129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.817144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.817151] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.817157] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.826818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 
00:27:48.521 [2024-07-25 19:18:40.837111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.837149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.837163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.837170] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.837176] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.846730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.857094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.857128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.857142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.857149] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.857155] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.866880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 00:27:48.521 [2024-07-25 19:18:40.877158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.877197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.877212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.877219] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.521 [2024-07-25 19:18:40.877225] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.521 [2024-07-25 19:18:40.887048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.521 qpair failed and we were unable to recover it. 
00:27:48.521 [2024-07-25 19:18:40.897240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.521 [2024-07-25 19:18:40.897281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.521 [2024-07-25 19:18:40.897295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.521 [2024-07-25 19:18:40.897302] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.522 [2024-07-25 19:18:40.897308] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.522 [2024-07-25 19:18:40.907137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.522 qpair failed and we were unable to recover it. 00:27:48.522 [2024-07-25 19:18:40.917335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.522 [2024-07-25 19:18:40.917373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.522 [2024-07-25 19:18:40.917386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.522 [2024-07-25 19:18:40.917393] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.522 [2024-07-25 19:18:40.917399] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.522 [2024-07-25 19:18:40.927141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.522 qpair failed and we were unable to recover it. 00:27:48.522 [2024-07-25 19:18:40.937347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.522 [2024-07-25 19:18:40.937384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.522 [2024-07-25 19:18:40.937398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.522 [2024-07-25 19:18:40.937405] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.522 [2024-07-25 19:18:40.937411] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.522 [2024-07-25 19:18:40.947212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.522 qpair failed and we were unable to recover it. 
00:27:48.522 [2024-07-25 19:18:40.957438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.522 [2024-07-25 19:18:40.957476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.522 [2024-07-25 19:18:40.957490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.522 [2024-07-25 19:18:40.957497] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.522 [2024-07-25 19:18:40.957503] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.522 [2024-07-25 19:18:40.967087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.522 qpair failed and we were unable to recover it. 00:27:48.522 [2024-07-25 19:18:40.977483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.522 [2024-07-25 19:18:40.977526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.522 [2024-07-25 19:18:40.977539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.522 [2024-07-25 19:18:40.977546] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.522 [2024-07-25 19:18:40.977552] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.522 [2024-07-25 19:18:40.987269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.522 qpair failed and we were unable to recover it. 00:27:48.780 [2024-07-25 19:18:40.997683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:40.997717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:40.997731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:40.997738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:40.997745] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.007469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 
00:27:48.780 [2024-07-25 19:18:41.017639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.017679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.017693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.017703] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.017709] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.027558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 00:27:48.780 [2024-07-25 19:18:41.037736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.037777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.037791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.037798] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.037804] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.047591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 00:27:48.780 [2024-07-25 19:18:41.057795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.057837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.057851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.057858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.057864] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.067568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 
00:27:48.780 [2024-07-25 19:18:41.077757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.077793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.077807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.077814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.077820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.087786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 00:27:48.780 [2024-07-25 19:18:41.097844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.097885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.097903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.097910] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.097916] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.107804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 00:27:48.780 [2024-07-25 19:18:41.117937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:48.780 [2024-07-25 19:18:41.117975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:48.780 [2024-07-25 19:18:41.117989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:48.780 [2024-07-25 19:18:41.117996] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:48.780 [2024-07-25 19:18:41.118002] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.780 [2024-07-25 19:18:41.127675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.780 qpair failed and we were unable to recover it. 
00:27:48.780 [2024-07-25 19:18:41.138058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:48.780 [2024-07-25 19:18:41.138095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:48.780 [2024-07-25 19:18:41.138108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:48.780 [2024-07-25 19:18:41.138115] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:48.780 [2024-07-25 19:18:41.138121] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:48.780 [2024-07-25 19:18:41.147660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:48.780 qpair failed and we were unable to recover it.
[the seven-line failure block above repeats 22 more times, first entries 19:18:41.158029 through 19:18:41.588916, always for qpair id 1 / rqpair=0x2000003d4c40, each attempt ending "qpair failed and we were unable to recover it."]
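A note on the status pair these entries keep printing: "sct 1, sc 130" is the NVMe status the target returned to the Fabrics CONNECT command. The sketch below decodes it; the constants are taken from the NVMe-oF status tables and are an assumption for the reader to verify against the spec, not something SPDK emits.

#!/usr/bin/env bash
# Hedged decode of "sct 1, sc 130" (a sketch, values assumed from the NVMe-oF
# spec): SCT 0x1 = Command Specific Status; for CONNECT, SC 0x82 (decimal 130)
# = Connect Invalid Parameters. That is what a target returns when asked to
# attach an I/O qpair to a controller ID it no longer knows, which matches the
# "Unknown controller ID 0x1" lines above (the controller died with the old
# target process).
sct=1 sc=130
if (( sct == 1 && sc == 0x82 )); then
    echo "CONNECT rejected: invalid parameters (stale controller ID)"
fi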
00:27:50.236 Write completed with error (sct=0, sc=8)
00:27:50.236 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:50.237 [2024-07-25 19:18:42.593528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.237 [2024-07-25 19:18:42.602329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.237 [2024-07-25 19:18:42.602376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.237 [2024-07-25 19:18:42.602393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.237 [2024-07-25 19:18:42.602403] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.237 [2024-07-25 19:18:42.602411] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:27:50.237 [2024-07-25 19:18:42.612029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.237 qpair failed and we were unable to recover it.
00:27:50.237 [2024-07-25 19:18:42.622455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:50.237 [2024-07-25 19:18:42.622499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:50.237 [2024-07-25 19:18:42.622515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:50.237 [2024-07-25 19:18:42.622523] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:50.237 [2024-07-25 19:18:42.622529] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:27:50.237 [2024-07-25 19:18:42.632095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:50.237 qpair failed and we were unable to recover it.
00:27:50.237 [2024-07-25 19:18:42.632240] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:27:50.237 A controller has encountered a failure and is being reset.
00:27:50.237 [2024-07-25 19:18:42.632355] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:27:50.237 [2024-07-25 19:18:42.633808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:50.237 Controller properly reset.
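The numeric CM events in these entries come from librdmacm. A lookup table for the three values this log mentions; the numbering is assumed from the rdma-core <rdma/rdma_cma.h> enum, though the log text itself confirms 8 and 15.

#!/usr/bin/env bash
# Map the rdma_cm_event_type values seen in this log to their names.
declare -A cm_event=(
    [8]="RDMA_CM_EVENT_REJECTED"        # target refused the connection request
    [10]="RDMA_CM_EVENT_DISCONNECTED"   # the orderly teardown the host expected
    [15]="RDMA_CM_EVENT_TIMEWAIT_EXIT"  # what it saw after the target was killed hard
)
for ev in 8 10 15; do printf '%2d -> %s\n' "$ev" "${cm_event[$ev]}"; done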
00:27:51.611 Read completed with error (sct=0, sc=8)
00:27:51.611 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:51.612 [2024-07-25 19:18:43.647864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:52.548 Read completed with error (sct=0, sc=8)
00:27:52.548 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:52.549 [2024-07-25 19:18:44.652756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:52.549 Initializing NVMe Controllers
00:27:52.549 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:52.549 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:52.549 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:52.549 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:52.549 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:52.549 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:52.549 Initialization complete. Launching workers.
00:27:52.549 Starting thread on core 1
00:27:52.549 Starting thread on core 2
00:27:52.549 Starting thread on core 3
00:27:52.549 Starting thread on core 0
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:52.549
00:27:52.549 real	0m14.491s
00:27:52.549 user	0m28.546s
00:27:52.549 sys	0m2.841s
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:52.549 ************************************
00:27:52.549 END TEST nvmf_target_disconnect_tc2
00:27:52.549 ************************************
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:52.549 ************************************
00:27:52.549 START TEST nvmf_target_disconnect_tc3
00:27:52.549 ************************************
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=916694
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:27:52.549 19:18:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:27:52.549 EAL: No free 2048 kB hugepages reported on node 1
00:27:54.452 19:18:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 915095
19:18:46 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:27:55.929 Read completed with error (sct=0, sc=8)
00:27:55.929 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:55.929 [2024-07-25 19:18:47.965189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:56.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 915095 Killed "${NVMF_APP[@]}" "$@"
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=917247
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 917247
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
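The reconnect invocation a few entries up carries the whole tc3 scenario in its flags, restated by hand below. The flag glosses follow SPDK's perf-style option conventions and should be treated as assumptions, since the example's usage text is not part of this log.

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path taken from this job
"$SPDK/build/examples/reconnect" \
    -q 32 `# queue depth per qpair; matches the 32 aborted I/Os logged above` \
    -o 4096 `# 4 KiB I/O size` \
    -w randrw -M 50 `# random mixed workload, 50% reads` \
    -t 10 `# run for 10 seconds` \
    -c 0xF `# core mask: lcores 0-3, one qpair each` \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
# alt_traddr names the failover address the host should retry once the target
# on 192.168.100.8 is killed - exactly what the rest of this section exercises.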
19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 917247 ']'
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:56.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:56.553 19:18:48 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:56.553 [2024-07-25 19:18:48.846133] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:27:56.553 [2024-07-25 19:18:48.846180] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:56.553 EAL: No free 2048 kB hugepages reported on node 1
00:27:56.553 [2024-07-25 19:18:48.918804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:56.553 Read completed with error (sct=0, sc=8)
00:27:56.553 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:56.554 [2024-07-25 19:18:48.969633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:56.554 [2024-07-25 19:18:48.997508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:56.554 [2024-07-25 19:18:48.997542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:56.554 [2024-07-25 19:18:48.997549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:56.554 [2024-07-25 19:18:48.997555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:56.554 [2024-07-25 19:18:48.997560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:56.554 [2024-07-25 19:18:48.997678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:56.554 [2024-07-25 19:18:48.997792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:56.554 [2024-07-25 19:18:48.997905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:56.554 [2024-07-25 19:18:48.997939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:57.491 Malloc0
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:57.491 [2024-07-25 19:18:49.779089] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa21f50/0xa2dad0) succeed.
00:27:57.491 [2024-07-25 19:18:49.788908] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa23590/0xaadb40) succeed.
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:57.491 [2024-07-25 19:18:49.933880] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
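The rpc_cmd sequence just logged (malloc bdev, RDMA transport, subsystem, namespace, listeners on the failover address) can be replayed outside the harness with SPDK's scripts/rpc.py against a running nvmf_tgt. A sketch, assuming the default /var/tmp/spdk.sock RPC socket and the paths used by this job:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"                            # talks to /var/tmp/spdk.sock
"$RPC" bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512 B blocks
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420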
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:57.491 19:18:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 916694
00:27:57.751 Write completed with error (sct=0, sc=8)
00:27:57.751 starting I/O failed
[the completion-error/"starting I/O failed" pair above repeats for all 32 outstanding Read/Write I/Os on the qpair]
00:27:57.751 [2024-07-25 19:18:49.974065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:57.751 [2024-07-25 19:18:49.975676] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.751 [2024-07-25 19:18:49.975705] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:57.751 [2024-07-25 19:18:49.975711] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:27:58.688 [2024-07-25 19:18:50.979368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:58.688 qpair failed and we were unable to recover it.
[the reconnect cycle above (RDMA_CM_EVENT_REJECTED, RDMA connect error -74, failed to connect rqpair=0x2000003d3000, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it.") repeats roughly once per second, starting at 19:18:50.980957, 19:18:51.986228, 19:18:52.991379 and 19:18:53.996773]
00:28:02.874 [2024-07-25 19:18:55.000555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:02.874 qpair failed and we were unable to recover it.
00:28:02.874 [2024-07-25 19:18:55.002014] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:02.874 [2024-07-25 19:18:55.002029] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:02.874 [2024-07-25 19:18:55.002035] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:28:03.806 [2024-07-25 19:18:56.005897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:03.806 qpair failed and we were unable to recover it.
00:28:03.806 [2024-07-25 19:18:56.007374] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:03.806 [2024-07-25 19:18:56.007389] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:03.806 [2024-07-25 19:18:56.007396] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:28:04.742 [2024-07-25 19:18:57.011095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:04.742 qpair failed and we were unable to recover it.
00:28:04.742 [2024-07-25 19:18:57.013087] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:04.742 [2024-07-25 19:18:57.013142] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:04.742 [2024-07-25 19:18:57.013163] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:28:05.678 [2024-07-25 19:18:58.017089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:05.678 qpair failed and we were unable to recover it.
00:28:05.678 [2024-07-25 19:18:58.018520] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:05.678 [2024-07-25 19:18:58.018535] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:05.678 [2024-07-25 19:18:58.018540] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:28:06.613 [2024-07-25 19:18:59.022137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.613 qpair failed and we were unable to recover it.
00:28:06.613 [2024-07-25 19:18:59.022259] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:28:06.613 A controller has encountered a failure and is being reset.
00:28:06.613 Resorting to new failover address 192.168.100.9
00:28:06.613 [2024-07-25 19:18:59.022355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
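What follows is the failover itself: the host gives up on 192.168.100.8 and reconnects via the alternate address. For comparison only, the same move with the kernel initiator would look roughly like the sketch below; this test uses the SPDK reconnect example, not nvme-cli, so treat this purely as an illustrative analogy.

#!/usr/bin/env bash
# Try the primary address first, then the failover address (needs nvme-cli
# and root; hypothetical here, not part of this test).
NQN=nqn.2016-06.io.spdk:cnode1
for addr in 192.168.100.8 192.168.100.9; do
    if nvme connect -t rdma -a "$addr" -s 4420 -n "$NQN"; then
        echo "connected to $NQN at $addr"
        break
    fi
done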
00:28:06.613 [2024-07-25 19:18:59.022414] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:06.613 [2024-07-25 19:18:59.023802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:06.613 Controller properly reset. 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Read completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 Write completed with error (sct=0, sc=8) 00:28:07.990 starting I/O failed 00:28:07.990 [2024-07-25 19:19:00.071368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.990 Initializing NVMe Controllers 00:28:07.990 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.990 Attached to NVMe over 
Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:07.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:07.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:07.990 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:07.991 Initialization complete. Launching workers. 00:28:07.991 Starting thread on core 1 00:28:07.991 Starting thread on core 2 00:28:07.991 Starting thread on core 3 00:28:07.991 Starting thread on core 0 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:28:07.991 00:28:07.991 real 0m15.344s 00:28:07.991 user 1m5.291s 00:28:07.991 sys 0m3.675s 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.991 ************************************ 00:28:07.991 END TEST nvmf_target_disconnect_tc3 00:28:07.991 ************************************ 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:07.991 rmmod nvme_rdma 00:28:07.991 rmmod nvme_fabrics 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 917247 ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 917247 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 917247 ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 917247 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917247 00:28:07.991 
19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917247' 00:28:07.991 killing process with pid 917247 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 917247 00:28:07.991 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 917247 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:08.250 00:28:08.250 real 0m37.510s 00:28:08.250 user 2m22.192s 00:28:08.250 sys 0m11.585s 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:08.250 ************************************ 00:28:08.250 END TEST nvmf_target_disconnect 00:28:08.250 ************************************ 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:08.250 00:28:08.250 real 5m8.302s 00:28:08.250 user 12m48.533s 00:28:08.250 sys 1m23.999s 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.250 19:19:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.250 ************************************ 00:28:08.250 END TEST nvmf_host 00:28:08.250 ************************************ 00:28:08.250 00:28:08.250 real 19m47.385s 00:28:08.250 user 50m53.600s 00:28:08.250 sys 4m36.742s 00:28:08.250 19:19:00 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.250 19:19:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:08.250 ************************************ 00:28:08.250 END TEST nvmf_rdma 00:28:08.250 ************************************ 00:28:08.250 19:19:00 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:08.250 19:19:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:08.250 19:19:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:08.250 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:28:08.250 ************************************ 00:28:08.250 START TEST spdkcli_nvmf_rdma 00:28:08.250 ************************************ 00:28:08.250 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:28:08.509 * Looking for test storage... 
00:28:08.509 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80bdebd3-4c74-ea11-906e-0017a4403562 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=80bdebd3-4c74-ea11-906e-0017a4403562 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=919437 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 919437 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 919437 ']' 00:28:08.509 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.510 19:19:00 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:08.510 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.510 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.510 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.510 19:19:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:08.510 [2024-07-25 19:19:00.866087] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:08.510 [2024-07-25 19:19:00.866140] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919437 ] 00:28:08.510 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.510 [2024-07-25 19:19:00.934305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:08.767 [2024-07-25 19:19:01.012618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.768 [2024-07-25 19:19:01.012621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.335 19:19:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.900 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x15b3 - 0x1017)' 00:28:15.900 Found 0000:af:00.0 (0x15b3 - 0x1017) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x15b3 - 0x1017)' 00:28:15.901 Found 0000:af:00.1 (0x15b3 - 0x1017) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: mlx_0_0' 00:28:15.901 Found net devices under 0000:af:00.0: mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: mlx_0_1' 00:28:15.901 Found net devices under 0000:af:00.1: mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:15.901 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:15.901 link/ether b8:59:9f:af:fd:68 brd ff:ff:ff:ff:ff:ff 00:28:15.901 altname enp175s0f0np0 00:28:15.901 altname ens801f0np0 00:28:15.901 inet 192.168.100.8/24 scope global mlx_0_0 00:28:15.901 valid_lft forever preferred_lft forever 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:15.901 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:28:15.901 link/ether b8:59:9f:af:fd:69 brd ff:ff:ff:ff:ff:ff 00:28:15.901 altname enp175s0f1np1 00:28:15.901 altname ens801f1np1 00:28:15.901 inet 192.168.100.9/24 scope global mlx_0_1 00:28:15.901 valid_lft forever preferred_lft forever 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:28:15.901 192.168.100.9' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:15.901 192.168.100.9' 00:28:15.901 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:15.902 192.168.100.9' 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:15.902 19:19:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:15.902 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:15.902 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:15.902 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:15.902 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:15.902 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:15.902 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:15.902 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:15.902 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:15.902 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:15.902 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:15.902 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:15.902 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:15.902 ' 00:28:17.806 [2024-07-25 19:19:09.969051] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11c2df0/0x11d7900) succeed. 00:28:17.806 [2024-07-25 19:19:09.978708] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11c4250/0x1242940) succeed. 
00:28:19.184 [2024-07-25 19:19:11.340439] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:28:21.719 [2024-07-25 19:19:13.768297] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:28:23.623 [2024-07-25 19:19:15.883338] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:25.527 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:25.527 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:25.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:25.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:25.527 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:25.527 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:25.527 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:25.527 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:28:25.527 19:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:25.786 19:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:25.786 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:25.786 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:25.786 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:25.786 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:28:25.786 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:28:25.786 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:25.786 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:25.786 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:25.786 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:25.786 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:25.786 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:25.786 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:25.786 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:25.786 ' 00:28:31.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:31.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:31.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:31.058 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:31.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:28:31.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:28:31.059 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:31.059 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:31.059 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 919437 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 919437 ']' 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 919437 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919437 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919437' 00:28:31.318 killing process with pid 919437 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 919437 00:28:31.318 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 919437 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:31.577 
19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:31.577 rmmod nvme_rdma 00:28:31.577 rmmod nvme_fabrics 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:31.577 00:28:31.577 real 0m23.298s 00:28:31.577 user 0m51.626s 00:28:31.577 sys 0m5.029s 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:31.577 19:19:23 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:31.577 ************************************ 00:28:31.577 END TEST spdkcli_nvmf_rdma 00:28:31.577 ************************************ 00:28:31.577 19:19:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:28:31.577 19:19:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:31.577 19:19:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:31.577 19:19:24 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:31.577 19:19:24 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:28:31.577 19:19:24 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:28:31.577 19:19:24 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:28:31.577 19:19:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.577 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:28:31.577 19:19:24 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:28:31.577 19:19:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:31.577 19:19:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:31.577 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:28:36.850 INFO: APP EXITING 00:28:36.850 INFO: killing all VMs 00:28:36.850 INFO: killing vhost app 00:28:36.850 INFO: EXIT DONE 00:28:39.384 Waiting for block devices as requested 00:28:39.384 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:39.384 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:39.384 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:39.384 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:39.384 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:39.384 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:39.643 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:39.643 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:39.643 
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:39.903 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:39.903 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:39.903 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:39.903 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:40.162 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:40.162 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:40.162 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:40.421 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:43.704 Cleaning 00:28:43.704 Removing: /var/run/dpdk/spdk0/config 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:43.704 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:43.704 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:43.704 Removing: /var/run/dpdk/spdk1/config 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:43.704 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:43.704 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:43.704 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:43.704 Removing: /var/run/dpdk/spdk2/config 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:43.704 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:43.704 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:43.704 Removing: /var/run/dpdk/spdk3/config 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:43.704 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:43.705 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:43.705 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:43.705 Removing: /var/run/dpdk/spdk4/config 00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:28:43.705 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:43.705 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:43.705 Removing: /dev/shm/bdevperf_trace.pid647706
00:28:43.705 Removing: /dev/shm/bdevperf_trace.pid836273
00:28:43.705 Removing: /dev/shm/bdev_svc_trace.1
00:28:43.705 Removing: /dev/shm/nvmf_trace.0
00:28:43.705 Removing: /dev/shm/spdk_tgt_trace.pid604431
00:28:43.705 Removing: /var/run/dpdk/spdk0
00:28:43.705 Removing: /var/run/dpdk/spdk1
00:28:43.705 Removing: /var/run/dpdk/spdk2
00:28:43.705 Removing: /var/run/dpdk/spdk3
00:28:43.705 Removing: /var/run/dpdk/spdk4
00:28:43.705 Removing: /var/run/dpdk/spdk_pid602270
00:28:43.705 Removing: /var/run/dpdk/spdk_pid603343
00:28:43.705 Removing: /var/run/dpdk/spdk_pid604431
00:28:43.705 Removing: /var/run/dpdk/spdk_pid605073
00:28:43.705 Removing: /var/run/dpdk/spdk_pid606026
00:28:43.705 Removing: /var/run/dpdk/spdk_pid606278
00:28:43.705 Removing: /var/run/dpdk/spdk_pid607257
00:28:43.705 Removing: /var/run/dpdk/spdk_pid607493
00:28:43.705 Removing: /var/run/dpdk/spdk_pid607686
00:28:43.705 Removing: /var/run/dpdk/spdk_pid612404
00:28:43.705 Removing: /var/run/dpdk/spdk_pid613690
00:28:43.705 Removing: /var/run/dpdk/spdk_pid613971
00:28:43.705 Removing: /var/run/dpdk/spdk_pid614263
00:28:43.705 Removing: /var/run/dpdk/spdk_pid614570
00:28:43.705 Removing: /var/run/dpdk/spdk_pid614924
00:28:43.705 Removing: /var/run/dpdk/spdk_pid615131
00:28:43.705 Removing: /var/run/dpdk/spdk_pid615375
00:28:43.705 Removing: /var/run/dpdk/spdk_pid615653
00:28:43.705 Removing: /var/run/dpdk/spdk_pid616629
00:28:43.705 Removing: /var/run/dpdk/spdk_pid619653
00:28:43.705 Removing: /var/run/dpdk/spdk_pid619922
00:28:43.705 Removing: /var/run/dpdk/spdk_pid620188
00:28:43.705 Removing: /var/run/dpdk/spdk_pid620419
00:28:43.705 Removing: /var/run/dpdk/spdk_pid620816
00:28:43.705 Removing: /var/run/dpdk/spdk_pid620931
00:28:43.705 Removing: /var/run/dpdk/spdk_pid621432
00:28:43.705 Removing: /var/run/dpdk/spdk_pid621663
00:28:43.705 Removing: /var/run/dpdk/spdk_pid621924
00:28:43.705 Removing: /var/run/dpdk/spdk_pid621968
00:28:43.705 Removing: /var/run/dpdk/spdk_pid622201
00:28:43.705 Removing: /var/run/dpdk/spdk_pid622436
00:28:43.705 Removing: /var/run/dpdk/spdk_pid622892
00:28:43.705 Removing: /var/run/dpdk/spdk_pid623089
00:28:43.705 Removing: /var/run/dpdk/spdk_pid623389
00:28:43.705 Removing: /var/run/dpdk/spdk_pid627283
00:28:43.705 Removing: /var/run/dpdk/spdk_pid631347
00:28:43.705 Removing: /var/run/dpdk/spdk_pid642157
00:28:43.705 Removing: /var/run/dpdk/spdk_pid642933
00:28:43.705 Removing: /var/run/dpdk/spdk_pid647706
00:28:43.705 Removing: /var/run/dpdk/spdk_pid647966
00:28:43.705 Removing: /var/run/dpdk/spdk_pid652062
00:28:43.705 Removing: /var/run/dpdk/spdk_pid657936
00:28:43.705 Removing: /var/run/dpdk/spdk_pid660647
00:28:43.705 Removing: /var/run/dpdk/spdk_pid670723
00:28:43.705 Removing: /var/run/dpdk/spdk_pid696874
00:28:43.705 Removing: /var/run/dpdk/spdk_pid700506
00:28:43.705 Removing: /var/run/dpdk/spdk_pid750739
00:28:43.705 Removing: /var/run/dpdk/spdk_pid755792
00:28:43.705 Removing: /var/run/dpdk/spdk_pid761842
00:28:43.705 Removing: /var/run/dpdk/spdk_pid770710
00:28:43.705 Removing: /var/run/dpdk/spdk_pid834310
00:28:43.705 Removing: /var/run/dpdk/spdk_pid835172
00:28:43.705 Removing: /var/run/dpdk/spdk_pid836273
00:28:43.705 Removing: /var/run/dpdk/spdk_pid840448
00:28:43.705 Removing: /var/run/dpdk/spdk_pid847233
00:28:43.705 Removing: /var/run/dpdk/spdk_pid848094
00:28:43.705 Removing: /var/run/dpdk/spdk_pid849012
00:28:43.705 Removing: /var/run/dpdk/spdk_pid849940
00:28:43.705 Removing: /var/run/dpdk/spdk_pid850397
00:28:43.705 Removing: /var/run/dpdk/spdk_pid854689
00:28:43.705 Removing: /var/run/dpdk/spdk_pid854691
00:28:43.705 Removing: /var/run/dpdk/spdk_pid859136
00:28:43.705 Removing: /var/run/dpdk/spdk_pid859817
00:28:43.705 Removing: /var/run/dpdk/spdk_pid860287
00:28:43.705 Removing: /var/run/dpdk/spdk_pid861373
00:28:43.705 Removing: /var/run/dpdk/spdk_pid861450
00:28:43.705 Removing: /var/run/dpdk/spdk_pid866038
00:28:43.705 Removing: /var/run/dpdk/spdk_pid866506
00:28:43.705 Removing: /var/run/dpdk/spdk_pid870734
00:28:43.705 Removing: /var/run/dpdk/spdk_pid873438
00:28:43.705 Removing: /var/run/dpdk/spdk_pid878957
00:28:43.705 Removing: /var/run/dpdk/spdk_pid888969
00:28:43.705 Removing: /var/run/dpdk/spdk_pid888971
00:28:43.705 Removing: /var/run/dpdk/spdk_pid907588
00:28:43.705 Removing: /var/run/dpdk/spdk_pid907867
00:28:43.705 Removing: /var/run/dpdk/spdk_pid914104
00:28:43.705 Removing: /var/run/dpdk/spdk_pid914401
00:28:43.705 Removing: /var/run/dpdk/spdk_pid916694
00:28:43.705 Removing: /var/run/dpdk/spdk_pid919437
00:28:43.705 Clean
00:28:43.705 19:19:36 -- common/autotest_common.sh@1451 -- # return 0
00:28:43.705 19:19:36 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:28:43.705 19:19:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:43.705 19:19:36 -- common/autotest_common.sh@10 -- # set +x
00:28:43.964 19:19:36 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:28:43.964 19:19:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:43.964 19:19:36 -- common/autotest_common.sh@10 -- # set +x
00:28:43.964 19:19:36 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:28:43.964 19:19:36 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:28:43.964 19:19:36 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:28:43.964 19:19:36 -- spdk/autotest.sh@395 -- # hash lcov
00:28:43.964 19:19:36 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:28:43.964 19:19:36 -- spdk/autotest.sh@397 -- # hostname
00:28:43.964 19:19:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-09 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:28:43.964 geninfo: WARNING: invalid characters removed from testname!
00:29:05.921 19:19:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:05.921 19:19:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:07.297 19:19:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:09.200 19:20:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:11.106 19:20:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:12.480 19:20:04 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:29:14.383 19:20:06 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:14.383 19:20:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:29:14.383 19:20:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:14.383 19:20:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:14.383 19:20:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:14.383 19:20:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.383 19:20:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.383 19:20:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.383 19:20:06 -- paths/export.sh@5 -- $ export PATH
00:29:14.383 19:20:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:14.383 19:20:06 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:29:14.383 19:20:06 -- common/autobuild_common.sh@447 -- $ date +%s
00:29:14.383 19:20:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721928006.XXXXXX
00:29:14.383 19:20:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721928006.QJBcWb
00:29:14.383 19:20:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:29:14.383 19:20:06 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:29:14.383 19:20:06 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:29:14.383 19:20:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:29:14.383 19:20:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:29:14.383 19:20:06 -- common/autobuild_common.sh@463 -- $ get_config_params
00:29:14.383 19:20:06 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:29:14.383 19:20:06 -- common/autotest_common.sh@10 -- $ set +x
00:29:14.383 19:20:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:29:14.383 19:20:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:29:14.383 19:20:06 -- pm/common@17 -- $ local monitor
00:29:14.383 19:20:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.383 19:20:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.383 19:20:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.383 19:20:06 -- pm/common@21 -- $ date +%s
00:29:14.383 19:20:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.383 19:20:06 -- pm/common@21 -- $ date +%s
00:29:14.383 19:20:06 -- pm/common@25 -- $ sleep 1
00:29:14.383 19:20:06 -- pm/common@21 -- $ date +%s
00:29:14.383 19:20:06 -- pm/common@21 -- $ date +%s
00:29:14.383 19:20:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928006
00:29:14.383 19:20:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928006
00:29:14.384 19:20:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928006
00:29:14.384 19:20:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721928006
00:29:14.384 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928006_collect-vmstat.pm.log
00:29:14.384 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928006_collect-cpu-load.pm.log
00:29:14.384 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928006_collect-cpu-temp.pm.log
00:29:14.384 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721928006_collect-bmc-pm.bmc.pm.log
00:29:15.321 19:20:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:29:15.321 19:20:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:29:15.321 19:20:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:29:15.321 19:20:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:15.321 19:20:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:15.321 19:20:07 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:15.321 19:20:07 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:15.321 19:20:07 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:15.321 19:20:07 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:29:15.321 19:20:07 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:15.321 19:20:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:15.321 19:20:07 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:15.321 19:20:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:15.321 19:20:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:15.321 19:20:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:29:15.321 19:20:07 -- pm/common@44 -- $ pid=934191
00:29:15.321 19:20:07 -- pm/common@50 -- $ kill -TERM 934191
00:29:15.321 19:20:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:15.321 19:20:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:29:15.321 19:20:07 -- pm/common@44 -- $ pid=934193
00:29:15.321 19:20:07 -- pm/common@50 -- $ kill -TERM 934193
00:29:15.321 19:20:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:15.321 19:20:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:29:15.321 19:20:07 -- pm/common@44 -- $ pid=934195
00:29:15.321 19:20:07 -- pm/common@50 -- $ kill -TERM 934195
00:29:15.321 19:20:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:15.321 19:20:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:29:15.321 19:20:07 -- pm/common@44 -- $ pid=934219
00:29:15.321 19:20:07 -- pm/common@50 -- $ sudo -E kill -TERM 934219
00:29:15.321 + [[ -n 496895 ]]
00:29:15.321 + sudo kill 496895
00:29:15.332 [Pipeline] }
00:29:15.351 [Pipeline] // stage
00:29:15.357 [Pipeline] }
00:29:15.378 [Pipeline] // timeout
00:29:15.384 [Pipeline] }
00:29:15.403 [Pipeline] // catchError
00:29:15.409 [Pipeline] }
00:29:15.430 [Pipeline] // wrap
00:29:15.438 [Pipeline] }
00:29:15.453 [Pipeline] // catchError
00:29:15.463 [Pipeline] stage
00:29:15.466 [Pipeline] { (Epilogue)
00:29:15.481 [Pipeline] catchError
00:29:15.483 [Pipeline] {
00:29:15.497 [Pipeline] echo
00:29:15.499 Cleanup processes
00:29:15.505 [Pipeline] sh
00:29:15.792 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:29:15.792 934303 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:29:15.792 934592 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:29:15.808 [Pipeline] sh
00:29:16.094 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:29:16.094 ++ grep -v 'sudo pgrep'
00:29:16.094 ++ awk '{print $1}'
00:29:16.094 + sudo kill -9 934303
00:29:16.104 [Pipeline] sh
00:29:16.383 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:24.506 [Pipeline] sh
00:29:24.793 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:24.793 Artifacts sizes are good
00:29:24.808 [Pipeline] archiveArtifacts
00:29:24.815 Archiving artifacts
00:29:24.960 [Pipeline] sh
00:29:25.248 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:29:25.264 [Pipeline] cleanWs
00:29:25.274 [WS-CLEANUP] Deleting project workspace...
00:29:25.274 [WS-CLEANUP] Deferred wipeout is used...
00:29:25.281 [WS-CLEANUP] done
00:29:25.283 [Pipeline] }
00:29:25.306 [Pipeline] // catchError
00:29:25.319 [Pipeline] sh
00:29:25.603 + logger -p user.info -t JENKINS-CI
00:29:25.613 [Pipeline] }
00:29:25.631 [Pipeline] // stage
00:29:25.637 [Pipeline] }
00:29:25.654 [Pipeline] // node
00:29:25.660 [Pipeline] End of Pipeline
00:29:25.700 Finished: SUCCESS